Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2020/04/29 16:39:55 UTC

[GitHub] [incubator-hudi] dh376 opened a new issue #1571: [SUPPORT] Hudi IllegalArgumentException Wrong FS

dh376 opened a new issue #1571:
URL: https://github.com/apache/incubator-hudi/issues/1571


   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)?
   
   - Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apache.org.
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   I'm getting an `IllegalArgumentException: Wrong FS` error, and I don't know why.
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   0. Spin up an EMR cluster (emr-5.29.0, Spark 2.4.4, Ganglia 3.7.2, Zeppelin 0.8.2, JupyterHub 1.0.0, Hive 2.3.6, Presto 0.227, Tez 0.9.2, Hadoop distribution: Amazon 2.8.5)
   1. Start a Spark shell with
   ```
   spark-shell --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" --conf "spark.sql.hive.convertMetastoreParquet=false" --jars /usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/spark/external/lib/spark-avro.jar
   ```
   2. Run a basic upsert, which creates the Hudi table
   ```
    // Imports needed in the Spark shell session:
    import org.apache.hudi.DataSourceWriteOptions
    import org.apache.hudi.config.HoodieWriteConfig
    import org.apache.hudi.hive.MultiPartKeysValueExtractor
    import org.apache.spark.sql.SaveMode

    val hudiOptions = Map[String, String](
      HoodieWriteConfig.TABLE_NAME -> "my_hudi_table",
      DataSourceWriteOptions.HIVE_TABLE_OPT_KEY -> "my_hudi_table",
      DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY -> "COPY_ON_WRITE",
      DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> "uid",
      DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY -> "publisher",
      DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY -> "acquired_at",
      DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY -> "true",
      DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> "publisher",
      DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY -> classOf[MultiPartKeysValueExtractor].getName
    )

    df.write
      .format("org.apache.hudi")
      .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
      .options(hudiOptions)
      .mode(SaveMode.Append)
      .save("s3://qadv2p0/exp/")
   ```
   3. End the Spark shell session
   4. Start a new Spark shell session
   5. Do an upsert that deletes records (using `EmptyHoodieRecordPayload`)
   ```
    // Same imports as before (this is a fresh Spark shell session):
    import org.apache.hudi.DataSourceWriteOptions
    import org.apache.hudi.config.HoodieWriteConfig
    import org.apache.hudi.hive.MultiPartKeysValueExtractor
    import org.apache.spark.sql.SaveMode

    val hudiOptions = Map[String, String](
      HoodieWriteConfig.TABLE_NAME -> "my_hudi_table",
      DataSourceWriteOptions.HIVE_TABLE_OPT_KEY -> "my_hudi_table",
      DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY -> "COPY_ON_WRITE",
      DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> "uid",
      DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY -> "publisher",
      DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY -> "acquired_at",
      DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY -> "true",
      DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> "publisher",
      DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY -> classOf[MultiPartKeysValueExtractor].getName
    )

    toDeleteDf.write
      .format("org.apache.hudi")
      .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
      .option(DataSourceWriteOptions.PAYLOAD_CLASS_OPT_KEY, "org.apache.hudi.EmptyHoodieRecordPayload")
      .options(hudiOptions)
      .mode(SaveMode.Append)
      .save("s3://qadv2p0/exp/")
   ```
   
   **Expected behavior**
   
   I expected the second write to complete successfully, deleting the matched records. Instead, I got this error:
   ```
   ip-10-0-129-85.ec2.internal, executor 23): java.lang.IllegalArgumentException: Wrong FS: s3://facebook.com/inspirationhut, expected: s3://qadv2p0
   	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:669)
   	at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:487)
   	at com.amazon.ws.emr.hadoop.fs.staging.DefaultStagingMechanism.isStagingDirectoryPath(DefaultStagingMechanism.java:38)
   	at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:842)
   	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1440)
   	at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.exists(EmrFileSystem.java:352)
   	at org.apache.hudi.common.io.storage.HoodieWrapperFileSystem.exists(HoodieWrapperFileSystem.java:459)
   	at org.apache.hudi.common.util.FSUtils.createPathIfNotExists(FSUtils.java:517)
   	at org.apache.hudi.common.table.view.AbstractTableFileSystemView.lambda$ensurePartitionLoadedCorrectly$5(AbstractTableFileSystemView.java:221)
   	at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
   	at org.apache.hudi.common.table.view.AbstractTableFileSystemView.ensurePartitionLoadedCorrectly(AbstractTableFileSystemView.java:212)
   	at org.apache.hudi.common.table.view.AbstractTableFileSystemView.getLatestDataFilesBeforeOrOn(AbstractTableFileSystemView.java:351)
   	at org.apache.hudi.index.bloom.HoodieBloomIndex.lambda$loadInvolvedFiles$19c2c1bb$1(HoodieBloomIndex.java:247)
   	at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:125)
   	at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:125)
   	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
   	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
   	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
   	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
   	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
   	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
   	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
   	at scala.collection.AbstractIterator.to(Iterator.scala:1334)
   	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
   	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1334)
   	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
   	at scala.collection.AbstractIterator.toArray(Iterator.scala:1334)
   	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
   	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
   	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
   	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
   	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
   	at org.apache.spark.scheduler.Task.run(Task.scala:123)
   	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
   	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
   	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   	at java.lang.Thread.run(Thread.java:748)
   ```
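   
   For context, the "Wrong FS" message comes from Hadoop's `FileSystem.checkPath`, which rejects any `Path` whose scheme or authority differs from those of the `FileSystem` it is qualified against. Notably, the failing path `s3://facebook.com/inspirationhut` looks like a value of the `publisher` partition field rather than a real bucket, which suggests a URL-like partition value is being qualified as if it named its own filesystem. A minimal illustration of just that check (an editorial sketch, not part of the original report; it assumes an `s3` filesystem implementation is configured, as in the EMR Spark shell):
   
   ```scala
   import java.net.URI
   import org.apache.hadoop.conf.Configuration
   import org.apache.hadoop.fs.{FileSystem, Path}
   
   // The filesystem is rooted at the table's bucket ...
   val fs = FileSystem.get(new URI("s3://qadv2p0"), new Configuration())
   
   // ... so qualifying a path whose authority is a different "bucket"
   // (here, what looks like a partition value) fails the scheme/authority check:
   fs.makeQualified(new Path("s3://facebook.com/inspirationhut"))
   // => java.lang.IllegalArgumentException:
   //    Wrong FS: s3://facebook.com/inspirationhut, expected: s3://qadv2p0
   ```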
   
   **Environment Description**
   
   * Hudi version : 0.5.0-incubating (the version bundled with Amazon EMR 5.29.0)
   * Spark version : 2.4.4
   * Hive version : 2.3.6
   * Hadoop version : 2.8.5 (Amazon distribution)
   * Storage (HDFS/S3/GCS..) : S3
   * Running on Docker? (yes/no) : no
   
   **Stacktrace**
   
   See the full stack trace under **Expected behavior** above.
   
   





[GitHub] [incubator-hudi] bvaradar commented on issue #1571: [SUPPORT] Hudi IllegalArgumentException Wrong FS

Posted by GitBox <gi...@apache.org>.
bvaradar commented on issue #1571:
URL: https://github.com/apache/incubator-hudi/issues/1571#issuecomment-622596113


   I will look into this tonight





[GitHub] [incubator-hudi] bvaradar commented on issue #1571: [SUPPORT] Hudi IllegalArgumentException Wrong FS

Posted by GitBox <gi...@apache.org>.
bvaradar commented on issue #1571:
URL: https://github.com/apache/incubator-hudi/issues/1571#issuecomment-622723355


   @dh376 : I couldn't tell from the description how the path changed. Can you turn on the debug log level in the Spark shell and attach the entire output?
   
   BTW, it looks like you are trying to delete in the second write; you might need to set `option(OPERATION_OPT_KEY, "delete")` instead of an upsert, as sketched below.
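   
   A minimal sketch of that suggestion, reusing the reporter's own `hudiOptions` and table path (hedged: whether the plain `"delete"` operation value is available depends on the Hudi version in use):
   
   ```scala
   // Second write issued as a "delete" operation instead of an upsert
   // carrying EmptyHoodieRecordPayload; everything else stays the same.
   toDeleteDf.write
     .format("org.apache.hudi")
     .option(DataSourceWriteOptions.OPERATION_OPT_KEY, "delete")
     .options(hudiOptions)
     .mode(SaveMode.Append)
     .save("s3://qadv2p0/exp/")
   ```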
   
   





[GitHub] [incubator-hudi] nsivabalan commented on issue #1571: [SUPPORT] Hudi IllegalArgumentException Wrong FS

Posted by GitBox <gi...@apache.org>.
nsivabalan commented on issue #1571:
URL: https://github.com/apache/incubator-hudi/issues/1571#issuecomment-626101074


   @dh376 : Let us know if the issue has been resolved.





[GitHub] [incubator-hudi] bvaradar commented on issue #1571: [SUPPORT] Hudi IllegalArgumentException Wrong FS

Posted by GitBox <gi...@apache.org>.
bvaradar commented on issue #1571:
URL: https://github.com/apache/incubator-hudi/issues/1571#issuecomment-623936797


   @dh376 : If this is no longer an issue, please close it.

