Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2020/04/27 13:19:56 UTC
[GitHub] [incubator-hudi] tieke1121 opened a new issue #1568: [SUPPORT]
tieke1121 opened a new issue #1568:
URL: https://github.com/apache/incubator-hudi/issues/1568
**Describe the problem you faced**
**To Reproduce**
Steps to reproduce the behavior:
1. Use Spark Structured Streaming to ingest data into Hudi and sync it to Hive as a non-partitioned table.
2. Query the Hive table with beeline.
3. The query fails with `Caused by: java.io.FileNotFoundException: File does not exist`.
**Expected behavior**
**Environment Description**
* Hudi version : 0.5.2
* Spark version : 2.4.0
* Hive version : 2.2.1
* Hadoop version : 3.0.0
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) : no
**Additional context**
**Stacktrace**
```
javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:257)
	... 11 more
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://namenode1:8020/data/device/status/data/default/c48a77fc-fd20-4b04-ab42-97c7ad0b1791-0_0-2008-515972_20200427130146.parquet
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1500) at
```
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [incubator-hudi] tieke1121 commented on issue #1568: [SUPPORT] java.lang.reflect.InvocationTargetException when upsert
Posted by GitBox <gi...@apache.org>.
tieke1121 commented on issue #1568:
URL: https://github.com/apache/incubator-hudi/issues/1568#issuecomment-620324107
I've set it up:
```
dataFrame.writeStream
  .format("org.apache.hudi")
  .option("path", conf.getString("hudi.basePath"))
  .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, conf.getString("hudi.recordkey"))
  .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, conf.getString("hudi.precombineKey"))
  .option(HoodieWriteConfig.TABLE_NAME, conf.getString("hudi.tableName"))
  .option("checkpointLocation", conf.getString("hudi.checkpoinPath"))
  .option(DataSourceWriteOptions.HIVE_DATABASE_OPT_KEY, conf.getString("hive.database"))
  .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, conf.getString("hive.table"))
  .option(DataSourceWriteOptions.HIVE_URL_OPT_KEY, conf.getString("hive.url"))
  .option(DataSourceWriteOptions.HIVE_USER_OPT_KEY, conf.getString("hive.username"))
  .option(DataSourceWriteOptions.HIVE_PASS_OPT_KEY, conf.getString("hive.password"))
  .option(DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY, "true")
  .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, classOf[NonpartitionedKeyGenerator].getCanonicalName)
  .option(DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY, classOf[NonPartitionedExtractor].getCanonicalName)
  .outputMode(OutputMode.Append())
  .start()
```
and the HDFS path is:
```
-rw-r--r-- 3 root supergroup 737196 2020-04-28 01:16 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-1068-281898_20200428011554.parquet
-rw-r--r-- 3 root supergroup 745158 2020-04-28 01:16 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-1120-295461_20200428011603.parquet
-rw-r--r-- 3 root supergroup 750006 2020-04-28 01:16 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-1168-309014_20200428011613.parquet
-rw-r--r-- 3 root supergroup 755947 2020-04-28 01:16 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-1217-322579_20200428011624.parquet
-rw-r--r-- 3 root supergroup 765879 2020-04-28 01:16 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-1267-336149_20200428011634.parquet
-rw-r--r-- 3 root supergroup 690225 2020-04-28 01:14 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-770-200500_20200428011449.parquet
-rw-r--r-- 3 root supergroup 698213 2020-04-28 01:15 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-819-214064_20200428011500.parquet
-rw-r--r-- 3 root supergroup 705870 2020-04-28 01:15 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-870-227637_20200428011511.parquet
-rw-r--r-- 3 root supergroup 713830 2020-04-28 01:15 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-918-241200_20200428011521.parquet
-rw-r--r-- 3 root supergroup 720687 2020-04-28 01:15 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-967-254767_20200428011532.parquet
```
When I run a simple Hive query: `select deviceid from device_status_hudi_1;`
the query succeeds.
But when I run a more complex Hive query: `select deviceid from device_status_hudi_1 group by deviceid having count(deviceid)>1;`
it fails with:
`Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)`
```
2020-04-28 01:29:16,698 INFO [main] org.apache.hadoop.hive.conf.HiveConf: Found configuration file null
2020-04-28 01:29:16,880 INFO [main] org.apache.hadoop.hive.ql.exec.SerializationUtilities: Deserializing MapWork using kryo
2020-04-28 01:29:17,045 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.lang.reflect.InvocationTargetException
	at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
	at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
	at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:271)
	at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:217)
	at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:345)
	at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:702)
	at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:175)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:444)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:257)
	... 11 more
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://sap-namenode1:8020/wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-4288-1150066_20200428012704.parquet
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1500)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1493)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1508)
	at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:39)
	at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:413)
	at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:400)
	at org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:79)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:78)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:63)
	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:75)
	at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReader(HoodieParquetInputFormat.java:297)
	at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:68)
	... 16 more
```
[GitHub] [incubator-hudi] bhasudha commented on issue #1568: [SUPPORT] java.lang.reflect.InvocationTargetException when upsert
Posted by GitBox <gi...@apache.org>.
bhasudha commented on issue #1568:
URL: https://github.com/apache/incubator-hudi/issues/1568#issuecomment-620407768
@tieke1121 Are you setting these configs
```
--hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
--hiveconf hive.stats.autogather=false
```
when using beeline?
For example: https://hudi.apache.org/docs/docker_demo.html#step-4-a-run-hive-queries
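For reference, a beeline invocation with those settings might look like the following (a sketch only: the JDBC URL is a placeholder, not taken from this thread):

```
# Hypothetical HiveServer2 endpoint; substitute your own JDBC URL and credentials.
beeline -u jdbc:hive2://hiveserver2-host:10000 \
  --hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
  --hiveconf hive.stats.autogather=false \
  -e 'select deviceid from device_status_hudi_1 group by deviceid having count(deviceid) > 1;'
```

Forcing `HiveInputFormat` prevents Hive from combining splits via `CombineHiveInputFormat`, which can otherwise read stale file listings for Hudi tables.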
[GitHub] [incubator-hudi] bvaradar commented on issue #1568: [SUPPORT] java.lang.reflect.InvocationTargetException when upsert
Posted by GitBox <gi...@apache.org>.
bvaradar commented on issue #1568:
URL: https://github.com/apache/incubator-hudi/issues/1568#issuecomment-620292854
@tieke1121 : For a non-partitioned table, you would need to use NonpartitionedKeyGenerator as the key-generator class. Can you set it up and give it a shot?
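Concretely, the suggestion amounts to adding the key-generator option alongside the partition extractor in the stream writer. A sketch based on the configuration posted in this thread (option names follow the Hudi 0.5.x `DataSourceWriteOptions` API):

```
// For a non-partitioned table, both the key generator and the Hive
// partition extractor should be the non-partitioned variants.
.option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY,
  classOf[NonpartitionedKeyGenerator].getCanonicalName)
.option(DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY,
  classOf[NonPartitionedExtractor].getCanonicalName)
```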
[GitHub] [incubator-hudi] tieke1121 commented on issue #1568: [SUPPORT]
Posted by GitBox <gi...@apache.org>.
tieke1121 commented on issue #1568:
URL: https://github.com/apache/incubator-hudi/issues/1568#issuecomment-619981921
Spark Structured Streaming code:
```
dataFrame.writeStream
  .format("org.apache.hudi")
  .option("path", conf.getString("hudi.basePath"))
  .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, conf.getString("hudi.recordkey"))
  .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, conf.getString("hudi.precombineKey"))
  .option(HoodieWriteConfig.TABLE_NAME, conf.getString("hudi.tableName"))
  .option("checkpointLocation", conf.getString("hudi.checkpoinPath"))
  .option(DataSourceWriteOptions.HIVE_DATABASE_OPT_KEY, conf.getString("hive.database"))
  .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, conf.getString("hive.table"))
  .option(DataSourceWriteOptions.HIVE_URL_OPT_KEY, conf.getString("hive.url"))
  .option(DataSourceWriteOptions.HIVE_USER_OPT_KEY, conf.getString("hive.username"))
  .option(DataSourceWriteOptions.HIVE_PASS_OPT_KEY, conf.getString("hive.password"))
  .option(DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY, "true")
  .option(DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY, classOf[NonPartitionedExtractor].getCanonicalName)
  .outputMode(OutputMode.Append())
  .start()
```
[GitHub] [incubator-hudi] tieke1121 commented on issue #1568: [SUPPORT] java.lang.reflect.InvocationTargetException when upsert
Posted by GitBox <gi...@apache.org>.
tieke1121 commented on issue #1568:
URL: https://github.com/apache/incubator-hudi/issues/1568#issuecomment-620330335
1. There is no `default` directory after I set `DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY`.
2. Output of `hadoop fs -ls /wap-olap/data/device/status/data_1/.hoodie`:
```
Found 138 items
drwxr-xr-x - root supergroup 0 2020-04-28 01:11 /wap-olap/data/device/status/data_1/.hoodie/.aux
drwxr-xr-x - root supergroup 0 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/.temp
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:48 /wap-olap/data/device/status/data_1/.hoodie/20200428014848.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:48 /wap-olap/data/device/status/data_1/.hoodie/20200428014848.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:48 /wap-olap/data/device/status/data_1/.hoodie/20200428014848.clean.requested
-rw-r--r-- 3 root supergroup 5464 2020-04-28 01:48 /wap-olap/data/device/status/data_1/.hoodie/20200428014848.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:48 /wap-olap/data/device/status/data_1/.hoodie/20200428014848.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:48 /wap-olap/data/device/status/data_1/.hoodie/20200428014848.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014858.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014858.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014858.clean.requested
-rw-r--r-- 3 root supergroup 5464 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014858.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:48 /wap-olap/data/device/status/data_1/.hoodie/20200428014858.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014858.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014909.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014909.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014909.clean.requested
-rw-r--r-- 3 root supergroup 5463 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014909.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014909.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014909.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014919.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014919.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014919.clean.requested
-rw-r--r-- 3 root supergroup 5464 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014919.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014919.commit.requested
-rw-r--r-- 3 root supergroup 1055 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014919.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014930.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014930.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014930.clean.requested
-rw-r--r-- 3 root supergroup 5464 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014930.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014930.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014930.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014940.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014940.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014940.clean.requested
-rw-r--r-- 3 root supergroup 5463 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014940.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014940.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014940.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428014950.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428014950.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428014950.clean.requested
-rw-r--r-- 3 root supergroup 5463 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014950.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014950.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:49 /wap-olap/data/device/status/data_1/.hoodie/20200428014950.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015001.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015001.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015001.clean.requested
-rw-r--r-- 3 root supergroup 5463 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015001.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015001.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015001.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015011.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015011.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015011.clean.requested
-rw-r--r-- 3 root supergroup 5463 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015011.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015011.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015011.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015022.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015022.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015022.clean.requested
-rw-r--r-- 3 root supergroup 5464 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015022.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015022.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015022.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015033.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015033.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015033.clean.requested
-rw-r--r-- 3 root supergroup 5463 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015033.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015033.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015033.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015044.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015044.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015044.clean.requested
-rw-r--r-- 3 root supergroup 5463 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015044.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015044.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015044.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015054.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015054.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015054.clean.requested
-rw-r--r-- 3 root supergroup 5464 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015054.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015054.commit.requested
-rw-r--r-- 3 root supergroup 1055 2020-04-28 01:50 /wap-olap/data/device/status/data_1/.hoodie/20200428015054.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015105.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015105.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015105.clean.requested
-rw-r--r-- 3 root supergroup 5465 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015105.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015105.commit.requested
-rw-r--r-- 3 root supergroup 1055 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015105.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015116.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015116.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015116.clean.requested
-rw-r--r-- 3 root supergroup 5464 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015116.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015116.commit.requested
-rw-r--r-- 3 root supergroup 1055 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015116.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015127.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015127.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015127.clean.requested
-rw-r--r-- 3 root supergroup 5464 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015127.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015127.commit.requested
-rw-r--r-- 3 root supergroup 1055 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015127.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015138.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015138.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015138.clean.requested
-rw-r--r-- 3 root supergroup 5464 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015138.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015138.commit.requested
-rw-r--r-- 3 root supergroup 1055 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015138.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015149.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015149.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015149.clean.requested
-rw-r--r-- 3 root supergroup 5465 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015149.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015149.commit.requested
-rw-r--r-- 3 root supergroup 1055 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015149.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015159.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015159.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015159.clean.requested
-rw-r--r-- 3 root supergroup 5464 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015159.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:51 /wap-olap/data/device/status/data_1/.hoodie/20200428015159.commit.requested
-rw-r--r-- 3 root supergroup 1055 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015159.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015210.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015210.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015210.clean.requested
-rw-r--r-- 3 root supergroup 5465 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015210.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015210.commit.requested
-rw-r--r-- 3 root supergroup 1055 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015210.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015221.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015221.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015221.clean.requested
-rw-r--r-- 3 root supergroup 5464 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015221.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015221.commit.requested
-rw-r--r-- 3 root supergroup 1054 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015221.inflight
-rw-r--r-- 3 root supergroup 1305 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015233.clean
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015233.clean.inflight
-rw-r--r-- 3 root supergroup 948 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015233.clean.requested
-rw-r--r-- 3 root supergroup 5465 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015233.commit
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015233.commit.requested
-rw-r--r-- 3 root supergroup 1055 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015233.inflight
-rw-r--r-- 3 root supergroup 0 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015243.commit.requested
-rw-r--r-- 3 root supergroup 1055 2020-04-28 01:52 /wap-olap/data/device/status/data_1/.hoodie/20200428015243.inflight
drwxr-xr-x - root supergroup 0 2020-04-28 01:17 /wap-olap/data/device/status/data_1/.hoodie/archived
-rw-r--r-- 3 root supergroup 218 2020-04-28 01:11 /wap-olap/data/device/status/data_1/.hoodie/hoodie.properties
[root@sap-datanode3 device]# hadoop fs -ls /wap-olap/data/device/status/data_1/
Found 13 items
drwxr-xr-x - root supergroup 0 2020-04-28 01:53 /wap-olap/data/device/status/data_1/.hoodie
-rw-r--r-- 3 root supergroup 93 2020-04-28 01:12 /wap-olap/data/device/status/data_1/.hoodie_partition_metadata
-rw-r--r-- 3 root supergroup 1596884 2020-04-28 01:52 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-11621-3090139_20200428015159.parquet
-rw-r--r-- 3 root supergroup 1599758 2020-04-28 01:52 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-11674-3103702_20200428015210.parquet
-rw-r--r-- 3 root supergroup 1602477 2020-04-28 01:52 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-11726-3117264_20200428015221.parquet
-rw-r--r-- 3 root supergroup 1604964 2020-04-28 01:52 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-11780-3130832_20200428015233.parquet
-rw-r--r-- 3 root supergroup 1606335 2020-04-28 01:52 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-11831-3144395_20200428015243.parquet
-rw-r--r-- 3 root supergroup 1609249 2020-04-28 01:53 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-11880-3157966_20200428015254.parquet
-rw-r--r-- 3 root supergroup 1611447 2020-04-28 01:53 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-11931-3171531_20200428015305.parquet
-rw-r--r-- 3 root supergroup 1620704 2020-04-28 01:53 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-11979-3185103_20200428015315.parquet
-rw-r--r-- 3 root supergroup 1628150 2020-04-28 01:53 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-12029-3198670_20200428015324.parquet
-rw-r--r-- 3 root supergroup 1631618 2020-04-28 01:53 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-12083-3212239_20200428015336.parquet
-rw-r--r-- 3 root supergroup 1632733 2020-04-28 01:53 /wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-12134-3225805_20200428015346.parquet
Error:
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:257)
	... 11 more
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://sap-namenode1:8020/wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-11469-3049440_20200428015127.parquet
	at
```
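The listings above already hint at the cause: the file group `b33868cc-6609-47a3-8e93-bdd248deb21e-0` only has base files for instants 20200428015159 and later on HDFS, while the failing query is reading the older instant 20200428015127, which has likely been deleted by the cleaner between query planning and execution. Below is a minimal Python sketch (an illustration of Hudi's base-file naming convention `<fileId>_<writeToken>_<instantTime>.parquet`, not a Hudi API) that extracts instant times from the file names in this issue so the missing file can be compared against the oldest surviving one:

```python
import re

# Hudi base files are named "<fileId>_<writeToken>_<instantTime>.parquet".
# The regex below is an illustrative parser for that convention, not Hudi code.
BASE_FILE_RE = re.compile(
    r"^(?P<file_id>.+?)_(?P<write_token>[0-9-]+)_(?P<instant>\d{14})\.parquet$"
)

def instant_time(base_file_name: str) -> str:
    """Return the 14-digit commit instant encoded in a Hudi base file name."""
    m = BASE_FILE_RE.match(base_file_name)
    if m is None:
        raise ValueError(f"not a Hudi base file name: {base_file_name}")
    return m.group("instant")

# File referenced by the FileNotFoundException vs. the oldest file still on HDFS.
missing = "b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-11469-3049440_20200428015127.parquet"
surviving = "b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-11621-3090139_20200428015159.parquet"

# Instants sort lexicographically, so a plain string comparison shows the
# missing file predates everything the cleaner retained.
print(instant_time(missing) < instant_time(surviving))  # → True
```

Since instants are fixed-width timestamps, lexicographic comparison matches chronological order; a query whose splits reference instants older than the oldest surviving base file has raced with the cleaner.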
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [incubator-hudi] bvaradar commented on issue #1568: [SUPPORT] java.lang.reflect.InvocationTargetException when upsert
Posted by GitBox <gi...@apache.org>.
bvaradar commented on issue #1568:
URL: https://github.com/apache/incubator-hudi/issues/1568#issuecomment-620328437
@tieke1121: Is there a "default" directory under /wap-olap/data/device/status/data_1?
Can you list the /wap-olap/data/device/status/data_1/.hoodie folder and show the output here? Can you also cat the /wap-olap/data/device/status/data_1/.hoodie/20200428012704.xxxx file? Finally, please provide the listing of /wap-olap/data/device/status/data_1/ after the query failed.
[GitHub] [incubator-hudi] lamber-ken commented on issue #1568: [SUPPORT] java.lang.reflect.InvocationTargetException when upsert
Posted by GitBox <gi...@apache.org>.
lamber-ken commented on issue #1568:
URL: https://github.com/apache/incubator-hudi/issues/1568#issuecomment-620325288
Hello @tieke1121, you can use ` ``` ` to wrap stack traces, e.g.
```
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
2020-04-28 01:29:16,698 INFO [main] org.apache.hadoop.hive.conf.HiveConf: Found configuration file null
2020-04-28 01:29:16,880 INFO [main] org.apache.hadoop.hive.ql.exec.SerializationUtilities: Deserializing MapWork using kryo
2020-04-28 01:29:17,045 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:271)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:217)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:345)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:702)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:175)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:444)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:257)
... 11 more
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://sap-namenode1:8020/wap-olap/data/device/status/data_1/b33868cc-6609-47a3-8e93-bdd248deb21e-0_0-4288-1150066_20200428012704.parquet
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1500)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1493)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1508)
at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:39)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:413)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:400)
at org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:79)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:78)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:63)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:75)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReader(HoodieParquetInputFormat.java:297)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:68)
... 16 more
```
[GitHub] [incubator-hudi] lamber-ken edited a comment on issue #1568: [SUPPORT] java.lang.reflect.InvocationTargetException when upsert
Posted by GitBox <gi...@apache.org>.
lamber-ken edited a comment on issue #1568:
URL: https://github.com/apache/incubator-hudi/issues/1568#issuecomment-620325288
Hello @tieke1121, you can use ` ``` ` to wrap stack traces, e.g.
```
Exception in thread "main" java.lang.ArithmeticException: / by zero
at com.huahuiyang.channel.Test.main(Test.java:11)
```
![image](https://user-images.githubusercontent.com/20113411/80437430-1b724600-8934-11ea-9231-8a46ea48e05a.png)