Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2021/04/27 07:11:56 UTC

[GitHub] [iceberg] openinx opened a new issue #2525: Spark's remove_orphan_files procedure cannot remove orphan files located in remote object storage services

openinx opened a new issue #2525:
URL: https://github.com/apache/iceberg/issues/2525


   ```
   spark-sql> CALL dlf_catalog.system.remove_orphan_files(table=>'dlf_db.sample', dry_run => true);
   
   21/04/27 15:08:04 INFO BaseMetastoreTableOperations: Refreshing table metadata from new version: oss://iceberg-test/warehouse/dlf_db.db/sample/metadata/00423-a163e568-2b7d-4851-ab94-1ef94538bc60.metadata.json
   21/04/27 15:08:05 INFO BaseMetastoreCatalog: Table loaded by catalog: dlf_catalog.dlf_db.sample
   21/04/27 15:08:05 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1642.5 KiB, free 364.7 MiB)
   21/04/27 15:08:05 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 972.0 B, free 364.7 MiB)
   21/04/27 15:08:05 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 30.225.0.89:59153 (size: 972.0 B, free: 366.3 MiB)
   21/04/27 15:08:05 INFO SparkContext: Created broadcast 0 from broadcast at BaseSparkAction.java:154
   21/04/27 15:08:05 INFO CodeGenerator: Code generated in 11.895783 ms
   21/04/27 15:08:05 ERROR SparkSQLDriver: Failed in [CALL dlf_catalog.system.remove_orphan_files(table=>'dlf_db.sample', dry_run => true)]
   org.apache.iceberg.exceptions.RuntimeIOException: java.io.IOException: No FileSystem for scheme: oss
   	at org.apache.iceberg.spark.actions.BaseRemoveOrphanFilesSparkAction.listDirRecursively(BaseRemoveOrphanFilesSparkAction.java:230)
   	at org.apache.iceberg.spark.actions.BaseRemoveOrphanFilesSparkAction.buildActualFileDF(BaseRemoveOrphanFilesSparkAction.java:178)
   	at org.apache.iceberg.spark.actions.BaseRemoveOrphanFilesSparkAction.doExecute(BaseRemoveOrphanFilesSparkAction.java:151)
   	at org.apache.iceberg.spark.actions.BaseSparkAction.withJobGroupInfo(BaseSparkAction.java:102)
   	at org.apache.iceberg.spark.actions.BaseRemoveOrphanFilesSparkAction.execute(BaseRemoveOrphanFilesSparkAction.java:144)
   	at org.apache.iceberg.spark.actions.BaseRemoveOrphanFilesSparkAction.execute(BaseRemoveOrphanFilesSparkAction.java:77)
   	at org.apache.iceberg.spark.procedures.RemoveOrphanFilesProcedure.lambda$call$1(RemoveOrphanFilesProcedure.java:105)
   	at org.apache.iceberg.spark.procedures.BaseProcedure.execute(BaseProcedure.java:85)
   	at org.apache.iceberg.spark.procedures.BaseProcedure.withIcebergTable(BaseProcedure.java:78)
   	at org.apache.iceberg.spark.procedures.RemoveOrphanFilesProcedure.call(RemoveOrphanFilesProcedure.java:86)
   	at org.apache.spark.sql.execution.datasources.v2.CallExec.run(CallExec.scala:33)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:39)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:39)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:45)
   	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
   	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
   	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
   	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
   	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
   	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
   	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
   	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:607)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
   	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:602)
   	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:377)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:496)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:490)
   	at scala.collection.Iterator.foreach(Iterator.scala:941)
   	at scala.collection.Iterator.foreach$(Iterator.scala:941)
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
   	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
   	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:490)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:282)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
   	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
   	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
   	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
   	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
   	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
   	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
   	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   Caused by: java.io.IOException: No FileSystem for scheme: oss
   	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
   	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
   	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
   	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
   	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
   	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
   	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
   	at org.apache.iceberg.spark.actions.BaseRemoveOrphanFilesSparkAction.listDirRecursively(BaseRemoveOrphanFilesSparkAction.java:208)
   	... 54 more
   ```
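
   A possible workaround (not verified in this thread; it assumes the `hadoop-aliyun` connector jar is on the classpath, and the endpoint/credential values are placeholders) is to register the OSS filesystem implementation through the Spark Hadoop configuration before calling the procedure, for example:

    ```java
    // Sketch of a workaround, assuming the hadoop-aliyun connector is on the
    // classpath; the fs.oss.* keys are the standard hadoop-aliyun properties.
    import org.apache.spark.sql.SparkSession;

    public class RemoveOrphanFilesOnOss {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("remove-orphan-files-oss")
            .config("spark.hadoop.fs.oss.impl",
                "org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem")
            .config("spark.hadoop.fs.oss.endpoint", "<oss-endpoint>")
            .config("spark.hadoop.fs.oss.accessKeyId", "<access-key-id>")
            .config("spark.hadoop.fs.oss.accessKeySecret", "<access-key-secret>")
            .getOrCreate();

        // With the oss scheme registered, the FileSystem-based listing in
        // BaseRemoveOrphanFilesSparkAction can resolve oss:// paths.
        spark.sql("CALL dlf_catalog.system.remove_orphan_files("
            + "table => 'dlf_db.sample', dry_run => true)").show();
      }
    }
    ```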




[GitHub] [iceberg] openinx commented on issue #2525: Spark's remove_orphan_files procedure cannot remove orphan files located in remote object storage services

Posted by GitBox <gi...@apache.org>.
openinx commented on issue #2525:
URL: https://github.com/apache/iceberg/issues/2525#issuecomment-827375959


   It looks like the current `remove_orphan_files` procedure depends on the [hadoop fs](https://github.com/apache/iceberg/blob/1f77257fc24891f3e66a6c162ac239afd6ae8d72/spark/src/main/java/org/apache/iceberg/spark/actions/BaseRemoveOrphanFilesSparkAction.java#L178) API to list all the files, so if no `oss` hadoop fs implementation is provided, Spark cannot remove those orphan files. People who want to put their data on cloud storage services therefore have to implement both the `FileIO` and `org.apache.hadoop.fs.FileSystem` interfaces, which seems quite redundant. Would it be possible to add a `list` interface to the `FileIO` classes that is only allowed for data management procedures (never for the table read/write path)? That way, I think we could remove the hadoop fs dependency.
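   
   To make the proposal concrete, here is a hypothetical sketch (none of these names exist in Iceberg; they are purely illustrative) of what such a listing extension on `FileIO` might look like:
   
    ```java
    // Hypothetical sketch only: a listing capability that a FileIO
    // implementation could opt into, so maintenance procedures no longer
    // need org.apache.hadoop.fs.FileSystem to enumerate files.
    import java.util.Iterator;

    public interface SupportsListing {
      /** Minimal metadata about a stored file. */
      final class FileInfo {
        private final String location;
        private final long sizeInBytes;

        public FileInfo(String location, long sizeInBytes) {
          this.location = location;
          this.sizeInBytes = sizeInBytes;
        }

        public String location() { return location; }
        public long sizeInBytes() { return sizeInBytes; }
      }

      /**
       * Returns all files whose locations start with the given prefix.
       * Intended only for data management procedures such as
       * remove_orphan_files; never for the table read/write path.
       */
      Iterator<FileInfo> listPrefix(String prefix);
    }
    ```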
   




[GitHub] [iceberg] openinx commented on issue #2525: Spark's remove_orphan_files procedure cannot remove orphan files located in remote object storage services

Posted by GitBox <gi...@apache.org>.
openinx commented on issue #2525:
URL: https://github.com/apache/iceberg/issues/2525#issuecomment-827410148


   The `expire_snapshots` procedure also needs hadoop fs:
   
   ```
   spark-sql> CALL dlf_catalog.system.expire_snapshots(table => 'dlf_db.sample', older_than => 1619510640306, retain_last => 5);
   
   org.apache.iceberg.exceptions.RuntimeIOException: Failed to get file system for path: oss://iceberg-test/warehouse/dlf_db.db/sample/metadata/00424-85a395fe-5385-4fb1-a3c2-5d45c4db2aac.metadata.json
   	at org.apache.iceberg.hadoop.Util.getFs(Util.java:50)
   	at org.apache.iceberg.hadoop.HadoopInputFile.fromLocation(HadoopInputFile.java:54)
   	at org.apache.iceberg.hadoop.HadoopFileIO.newInputFile(HadoopFileIO.java:59)
   	at org.apache.iceberg.TableMetadataParser.read(TableMetadataParser.java:252)
   	at org.apache.iceberg.StaticTableOperations.current(StaticTableOperations.java:53)
   	at org.apache.iceberg.hadoop.HadoopTables.loadMetadataTable(HadoopTables.java:122)
   	at org.apache.iceberg.hadoop.HadoopTables.load(HadoopTables.java:82)
   	at org.apache.iceberg.spark.SparkCatalog.load(SparkCatalog.java:454)
   	at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:116)
   	at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:79)
   	at org.apache.spark.sql.DataFrameReader.$anonfun$load$1(DataFrameReader.scala:271)
   	at scala.Option.map(Option.scala:230)
   	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:248)
   	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:232)
   	at org.apache.iceberg.spark.actions.BaseSparkAction.loadMetadataTable(BaseSparkAction.java:205)
   	at org.apache.iceberg.spark.actions.BaseSparkAction.buildValidDataFileDF(BaseSparkAction.java:156)
   	at org.apache.iceberg.spark.actions.BaseExpireSnapshotsSparkAction.buildValidFileDF(BaseExpireSnapshotsSparkAction.java:203)
   	at org.apache.iceberg.spark.actions.BaseExpireSnapshotsSparkAction.expire(BaseExpireSnapshotsSparkAction.java:154)
   	at org.apache.iceberg.spark.actions.BaseExpireSnapshotsSparkAction.doExecute(BaseExpireSnapshotsSparkAction.java:193)
   	at org.apache.iceberg.spark.actions.BaseSparkAction.withJobGroupInfo(BaseSparkAction.java:102)
   	at org.apache.iceberg.spark.actions.BaseExpireSnapshotsSparkAction.execute(BaseExpireSnapshotsSparkAction.java:185)
   	at org.apache.iceberg.spark.actions.BaseExpireSnapshotsSparkAction.execute(BaseExpireSnapshotsSparkAction.java:65)
   	at org.apache.iceberg.spark.procedures.ExpireSnapshotsProcedure.lambda$call$0(ExpireSnapshotsProcedure.java:94)
   	at org.apache.iceberg.spark.procedures.BaseProcedure.execute(BaseProcedure.java:85)
   	at org.apache.iceberg.spark.procedures.BaseProcedure.modifyIcebergTable(BaseProcedure.java:74)
   	at org.apache.iceberg.spark.procedures.ExpireSnapshotsProcedure.call(ExpireSnapshotsProcedure.java:83)
   	at org.apache.spark.sql.execution.datasources.v2.CallExec.run(CallExec.scala:33)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:39)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:39)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:45)
   	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
   	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
   	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
   	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
   	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
   	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
   	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
   	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:607)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
   	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:602)
   	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:377)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:496)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:490)
   	at scala.collection.Iterator.foreach(Iterator.scala:941)
   	at scala.collection.Iterator.foreach$(Iterator.scala:941)
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
   	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
   	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:490)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:282)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
   	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
   	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
   	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
   	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
   	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
   	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
   	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   Caused by: java.io.IOException: No FileSystem for scheme: oss
   	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
   	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
   	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
   	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
   	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
   	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
   	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
   	at org.apache.iceberg.hadoop.Util.getFs(Util.java:48)
   	... 70 more
   ```
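
   Both procedures fail for the same underlying reason: `org.apache.hadoop.fs.Path#getFileSystem` cannot resolve the `oss` scheme. A minimal sketch reproducing the root cause (assuming a default Hadoop `Configuration` with no `fs.oss.impl` configured; the path is the one from the trace above):

    ```java
    // Minimal reproduction sketch: with no fs.oss.impl configured,
    // resolving an oss:// path fails with
    // java.io.IOException: No FileSystem for scheme: oss
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OssSchemeCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path("oss://iceberg-test/warehouse/dlf_db.db/sample");
        FileSystem fs = path.getFileSystem(conf); // throws IOException here
        System.out.println(fs.getUri());
      }
    }
    ```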

