Posted to reviews@spark.apache.org by "LuciferYang (via GitHub)" <gi...@apache.org> on 2023/05/23 09:01:53 UTC

[GitHub] [spark] LuciferYang commented on pull request #40848: [SPARK-43186][SQL][HIVE] Remove workaround for FileSinkDesc

LuciferYang commented on PR #40848:
URL: https://github.com/apache/spark/pull/40848#issuecomment-1558856186

   @pan3793 I found an interesting thing: after this one was merged, when I run the following commands:
   
   ```
   build/mvn clean install -DskipTests
   build/mvn test -pl connector/connect/client/jvm -Dtest=none -DwildcardSuites=org.apache.spark.sql.ClientE2ETestSuite
   ```
   
   four tests fail as follows:
   
   ```
   - read and write *** FAILED ***
     io.grpc.StatusRuntimeException: INTERNAL: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.hive.execution.HiveFileFormat could not be instantiated
     at io.grpc.Status.asRuntimeException(Status.java:535)
     at io.grpc.stub.ClientCalls$BlockingResponseStream.hasNext(ClientCalls.java:660)
     at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:45)
     at scala.collection.Iterator.foreach(Iterator.scala:943)
     at scala.collection.Iterator.foreach$(Iterator.scala:943)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
     at org.apache.spark.sql.SparkSession.execute(SparkSession.scala:458)
     at org.apache.spark.sql.DataFrameWriter.executeWriteOperation(DataFrameWriter.scala:257)
     at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:221)
     at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:210)
     ...
   - textFile *** FAILED ***
     io.grpc.StatusRuntimeException: INTERNAL: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.hive.execution.HiveFileFormat could not be instantiated
     at io.grpc.Status.asRuntimeException(Status.java:535)
     at io.grpc.stub.ClientCalls$BlockingResponseStream.hasNext(ClientCalls.java:660)
     at org.apache.spark.sql.connect.client.SparkResult.org$apache$spark$sql$connect$client$SparkResult$$processResponses(SparkResult.scala:62)
     at org.apache.spark.sql.connect.client.SparkResult.length(SparkResult.scala:114)
     at org.apache.spark.sql.connect.client.SparkResult.toArray(SparkResult.scala:131)
     at org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:2688)
     at org.apache.spark.sql.Dataset.withResult(Dataset.scala:3128)
     at org.apache.spark.sql.Dataset.collect(Dataset.scala:2687)
     at org.apache.spark.sql.ClientE2ETestSuite.$anonfun$new$12(ClientE2ETestSuite.scala:169)
     at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
     ...
   - write table *** FAILED ***
     io.grpc.StatusRuntimeException: INTERNAL: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.hive.execution.HiveFileFormat could not be instantiated
     at io.grpc.Status.asRuntimeException(Status.java:535)
     at io.grpc.stub.ClientCalls$BlockingResponseStream.hasNext(ClientCalls.java:660)
     at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:45)
     at scala.collection.Iterator.toStream(Iterator.scala:1417)
     at scala.collection.Iterator.toStream$(Iterator.scala:1416)
     at scala.collection.AbstractIterator.toStream(Iterator.scala:1431)
     at scala.collection.TraversableOnce.toSeq(TraversableOnce.scala:354)
     at scala.collection.TraversableOnce.toSeq$(TraversableOnce.scala:354)
     at scala.collection.AbstractIterator.toSeq(Iterator.scala:1431)
     at org.apache.spark.sql.SparkSession.execute(SparkSession.scala:471)
     ...
   - write without table or path *** FAILED ***
     io.grpc.StatusRuntimeException: INTERNAL: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.hive.execution.HiveFileFormat could not be instantiated
     at io.grpc.Status.asRuntimeException(Status.java:535)
     at io.grpc.stub.ClientCalls$BlockingResponseStream.hasNext(ClientCalls.java:660)
     at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:45)
     at scala.collection.Iterator.toStream(Iterator.scala:1417)
     at scala.collection.Iterator.toStream$(Iterator.scala:1416)
     at scala.collection.AbstractIterator.toStream(Iterator.scala:1431)
     at scala.collection.TraversableOnce.toSeq(TraversableOnce.scala:354)
     at scala.collection.TraversableOnce.toSeq$(TraversableOnce.scala:354)
     at scala.collection.AbstractIterator.toSeq(Iterator.scala:1431)
     at org.apache.spark.sql.SparkSession.execute(SparkSession.scala:471)
     ...
   ```
   
   but if I revert this one, the failures disappear. `CatalogSuite` and `StreamingQuerySuite` also have some problems. I have already created [SPARK-43647](https://issues.apache.org/jira/browse/SPARK-43647) to track this. Do you have time to investigate together?
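
   For context on where this message comes from: Spark discovers `DataSourceRegister` implementations (including `org.apache.spark.sql.hive.execution.HiveFileFormat`) through `java.util.ServiceLoader`, and `ServiceLoader` reports any exception thrown while constructing a provider (often a `NoClassDefFoundError` for a class missing from the classpath) as the generic `Provider ... could not be instantiated`. Below is a minimal diagnostic sketch, assuming the spark-hive jar is on the classpath; `ProviderProbe` is a hypothetical helper, not part of Spark. It iterates the same service loader and prints the hidden root cause:

   ```
   import java.util.{ServiceConfigurationError, ServiceLoader}

   import org.apache.spark.sql.sources.DataSourceRegister

   // Hypothetical diagnostic, not part of Spark: walk the DataSourceRegister
   // providers the same way Spark looks up data sources, but report each
   // provider's failure cause instead of stopping at the generic
   // "Provider ... could not be instantiated" error.
   object ProviderProbe {
     def main(args: Array[String]): Unit = {
       val it = ServiceLoader
         .load(classOf[DataSourceRegister], Thread.currentThread().getContextClassLoader)
         .iterator()
       while (it.hasNext) {
         try {
           val p = it.next()
           println(s"OK: ${p.getClass.getName} -> ${p.shortName()}")
         } catch {
           case e: ServiceConfigurationError =>
             // ServiceLoader continues with the next provider after one fails,
             // so a single bad provider does not hide the others.
             println(s"FAILED: ${e.getMessage}")
             var cause: Throwable = e.getCause
             while (cause != null) {
               println(s"  caused by: $cause")
               cause = cause.getCause
             }
         }
       }
     }
   }
   ```

   If the printed cause is a `NoClassDefFoundError` for a Hive class, that would be consistent with the workaround removal in this PR.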
   
    


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

