Posted to issues@spark.apache.org by "jiangjiguang0719 (Jira)" <ji...@apache.org> on 2023/04/25 09:21:00 UTC

[jira] [Updated] (SPARK-43278) Exception in thread "main" java.lang.NoSuchMethodError: java.nio.ByteBuffer.flip()Ljava/nio/ByteBuffer;

     [ https://issues.apache.org/jira/browse/SPARK-43278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

jiangjiguang0719 updated SPARK-43278:
-------------------------------------
    Description: 
Java version: 1.8.0_331, Apache Maven 3.8.4

I ran the following steps:
 # git clone [https://github.com/apache/spark.git]
 # git checkout -b v3.3.0 3.3.0
 # mvn clean install -DskipTests
 # copy hive-site.xml to examples/src/main/resources/
 # execute TPC-H Q6 with the program below

 
{code:java}
public static void main(String[] args) throws InterruptedException {
    SparkConf sparkConf = new SparkConf()
            .setAppName("demo")
            .setMaster("local[1]");

    SparkSession sparkSession = SparkSession.builder()
            .config(sparkConf)
            .enableHiveSupport()
            .getOrCreate();

    sparkSession.sql("use local_tpch_sf10_uncompressed_etl");
    sparkSession.sql(TPCH.SQL6).show();
}
{code}
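Before the failing query runs, it can be worth logging which JVM is active and where the Spark classes are loaded from, since a mismatch between the JDK that built the jars and the JRE running them is a common source of the failure shown below. A small, optional diagnostic sketch (not part of the original reproduction; the class name EnvCheck is made up for illustration):

{code:java}
public class EnvCheck {
    public static void main(String[] args) {
        // Print the runtime JVM and the jar providing the Spark class that later
        // fails, to rule out a stale or mixed build on the classpath.
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
        System.out.println("spark-core from "
                + org.apache.spark.util.io.ChunkedByteBufferOutputStream.class
                      .getProtectionDomain().getCodeSource().getLocation());
    }
}
{code}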
 

 

Running it, I get the following error:

Exception in thread "main" java.lang.NoSuchMethodError: java.nio.ByteBuffer.flip()Ljava/nio/ByteBuffer;
    at org.apache.spark.util.io.ChunkedByteBufferOutputStream.toChunkedByteBuffer(ChunkedByteBufferOutputStream.scala:115)
    at org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:325)
    at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:140)
    at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:95)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:75)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1529)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.buildReaderWithPartitionValues(ParquetFileFormat.scala:235)
    at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD$lzycompute(DataSourceScanExec.scala:457)
    at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD(DataSourceScanExec.scala:448)
    at org.apache.spark.sql.execution.FileSourceScanExec.doExecuteColumnar(DataSourceScanExec.scala:547)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeColumnar$1(SparkPlan.scala:221)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:232)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:229)
    at org.apache.spark.sql.execution.SparkPlan.executeColumnar(SparkPlan.scala:217)
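
For reference (this is a reading of the error, not something verified against this particular build): the descriptor java.nio.ByteBuffer.flip()Ljava/nio/ByteBuffer; means the bytecode expects flip() to be declared on ByteBuffer itself and to return ByteBuffer. That covariant override only exists since JDK 9; on a Java 8 runtime flip() is only inherited from java.nio.Buffer (returning Buffer), which is usually why this NoSuchMethodError appears when classes compiled against a newer JDK without --release 8 are run on Java 8. A minimal sketch of the mismatch and the usual source-level workaround (the class name FlipCompat is made up for illustration only):

{code:java}
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class FlipCompat {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.put((byte) 42);

        // Compiled on JDK 9+ without --release 8, this call is recorded as
        // ByteBuffer.flip()Ljava/nio/ByteBuffer; and throws NoSuchMethodError
        // on a Java 8 runtime, where ByteBuffer has no such declaration.
        // buf.flip();

        // Casting to Buffer pins the call to Buffer.flip()Ljava/nio/Buffer;,
        // which exists on both Java 8 and later runtimes.
        ((Buffer) buf).flip();

        System.out.println("readable bytes: " + buf.remaining()); // prints 1
    }
}
{code}

Equivalently, compiling everything with javac --release 8, or building and running on the same newer JDK, avoids the mismatch.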

  was:
Java version: 1.8.0_331, Apache Maven 3.8.4

I ran the following steps:
 # git clone [https://github.com/apache/spark.git]
 # git checkout -b v3.3.0 3.3.0
 #  mvn clean install -DskipTests
 # copy hive-site.xml to examples/src/main/resources/
 # execute TPC-H Q6 

!image-2023-04-25-17-14-50-392.png|width=437,height=246!

I get the following error:

!image-2023-04-25-17-15-57-874.png|width=466,height=161!


> Exception in thread "main" java.lang.NoSuchMethodError: java.nio.ByteBuffer.flip()Ljava/nio/ByteBuffer;
> -------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-43278
>                 URL: https://issues.apache.org/jira/browse/SPARK-43278
>             Project: Spark
>          Issue Type: Bug
>          Components: Java API
>    Affects Versions: 3.3.0
>            Reporter: jiangjiguang0719
>            Priority: Major
>


