Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2021/07/12 19:51:17 UTC

[GitHub] [iceberg] openinx opened a new issue #2809: AWS EMR 6.3.0 cannot access apache iceberg table

openinx opened a new issue #2809:
URL: https://github.com/apache/iceberg/issues/2809


   ### My Test Environment

   * AWS EMR: [6.3.0](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-release-6x.html#emr-630-release)
   
   ### How to reproduce
   
   Step 1. Start the spark-sql client
   
   ```bash
   spark-sql --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
       --conf spark.sql.catalog.hive_prod=org.apache.iceberg.spark.SparkCatalog \
       --conf spark.sql.catalog.hive_prod.type=hive \
       --conf spark.sql.catalog.hive_prod.uri=thrift://ip-172-31-34-107.ap-northeast-1.compute.internal:9083 \
       --conf spark.sql.catalog.hive_prod.warehouse=s3://dw-ali/warehouse
   ```
   
   Step 2. Execute the following command
   
   ```sql
   CREATE DATABASE hive_prod.iceberg_db;
   ```
   
   This throws the following exception:
   
   ```text
   spark-sql> create database hive_prod.iceberg_db;
   21/07/12 10:32:07 ERROR SparkSQLDriver: Failed in [create database hive_prod.iceberg_db]
   java.lang.NoSuchMethodError: org.apache.spark.sql.internal.VariableSubstitution.<init>(Lorg/apache/spark/sql/internal/SQLConf;)V
   	at org.apache.spark.sql.catalyst.parser.extensions.IcebergSparkSqlExtensionsParser.substitutor$lzycompute(IcebergSparkSqlExtensionsParser.scala:39)
   	at org.apache.spark.sql.catalyst.parser.extensions.IcebergSparkSqlExtensionsParser.substitutor(IcebergSparkSqlExtensionsParser.scala:39)
   	at org.apache.spark.sql.catalyst.parser.extensions.IcebergSparkSqlExtensionsParser.parsePlan(IcebergSparkSqlExtensionsParser.scala:96)
   	at org.apache.spark.sql.SparkSession.$anonfun$sql$2(SparkSession.scala:613)
   	at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:192)
   	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:613)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
   	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:610)
   	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:67)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:381)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:500)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:494)
   	at scala.collection.Iterator.foreach(Iterator.scala:941)
   	at scala.collection.Iterator.foreach$(Iterator.scala:941)
   	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
   	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
   	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
   	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:494)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:284)
   	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
   	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:959)
   	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
   	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
   	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
   	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1038)
   	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1047)
   	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   ```
   
   
   I checked the Iceberg runtime jars installed on the cluster:
   
   ```bash
   [hadoop@ip-172-31-34-107 ~]$ locate iceberg
   /emr/instance-controller/lib/bootstrap-actions/1/bootstrap-with-iceberg.sh
   /usr/share/aws/aws-java-sdk/iceberg-flink-runtime-0.11.1.jar
   /usr/share/aws/aws-java-sdk/iceberg-spark3-runtime-0.11.1.jar
   ```
   
   It turns out that AWS EMR 6.3.0 still ships `iceberg-spark3-runtime-0.11.1.jar` by default, and that runtime jar cannot be loaded by Spark 3.1.1 (see PR https://github.com/apache/iceberg/pull/2512).
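
   For context, the error is the runtime side of a binary-compatibility break: the stack trace shows the 0.11.1 parser calling `VariableSubstitution.<init>(SQLConf)`, a constructor that exists in Spark 3.0 but not in Spark 3.1. The snippet below is only an illustrative sketch (the `newVariableSubstitution` helper is hypothetical, not Iceberg's actual code or the fix in the linked PR) of how a parser can stay compatible with both Spark versions by choosing the constructor reflectively:

   ```scala
   import org.apache.spark.sql.internal.SQLConf

   // Hypothetical helper, not Iceberg's actual code. Iceberg 0.11.1 was compiled
   // against Spark 3.0, where VariableSubstitution had a (SQLConf) constructor;
   // on Spark 3.1 that constructor is gone, which produces the NoSuchMethodError above.
   def newVariableSubstitution(conf: SQLConf): AnyRef = {
     val cls = Class.forName("org.apache.spark.sql.internal.VariableSubstitution")
     try {
       // Spark 3.0.x: VariableSubstitution(SQLConf)
       cls.getConstructor(classOf[SQLConf]).newInstance(conf).asInstanceOf[AnyRef]
     } catch {
       case _: NoSuchMethodException =>
         // Spark 3.1.x: VariableSubstitution()
         cls.getDeclaredConstructor().newInstance().asInstanceOf[AnyRef]
     }
   }
   ```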
   
   
   FYI @jackye1995 



[GitHub] [iceberg] openinx commented on issue #2809: AWS EMR 6.3.0 cannot access apache iceberg table

Posted by GitBox <gi...@apache.org>.
openinx commented on issue #2809:
URL: https://github.com/apache/iceberg/issues/2809#issuecomment-878821021


   Yes, once Apache Iceberg 0.12.0 is released, AWS EMR 6.3 will also need to upgrade its bundled Iceberg version from 0.11.1 to 0.12.0.
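
   Once a Spark 3.1-compatible Iceberg runtime (for example, an eventual 0.12.0 build of `iceberg-spark3-runtime`) is on the classpath, the same setup can be sanity-checked programmatically. A minimal Scala sketch, reusing the catalog settings from the report above (the app name and the use of `CREATE DATABASE IF NOT EXISTS` are illustrative choices):

   ```scala
   import org.apache.spark.sql.SparkSession

   // Minimal verification sketch: the same catalog settings as the spark-sql
   // invocation in the issue, assuming a Spark 3.1-compatible iceberg-spark3-runtime
   // jar is on the driver and executor classpath.
   val spark = SparkSession.builder()
     .appName("iceberg-emr-smoke-test")
     .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
     .config("spark.sql.catalog.hive_prod", "org.apache.iceberg.spark.SparkCatalog")
     .config("spark.sql.catalog.hive_prod.type", "hive")
     .config("spark.sql.catalog.hive_prod.uri", "thrift://ip-172-31-34-107.ap-northeast-1.compute.internal:9083")
     .config("spark.sql.catalog.hive_prod.warehouse", "s3://dw-ali/warehouse")
     .enableHiveSupport()
     .getOrCreate()

   spark.sql("CREATE DATABASE IF NOT EXISTS hive_prod.iceberg_db")
   ```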



[GitHub] [iceberg] jackye1995 commented on issue #2809: AWS EMR 6.3.0 cannot access apache iceberg table

Posted by GitBox <gi...@apache.org>.
jackye1995 commented on issue #2809:
URL: https://github.com/apache/iceberg/issues/2809#issuecomment-878758952


   Yeah, EMR 6.3 is using Spark 3.1, so this is expected, and the PR you linked fixes it. I suppose we need to wait for the 0.12.0 release for this?

