Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2021/09/16 13:42:18 UTC

[GitHub] [iceberg] bvinayakumar opened a new issue #3131: Unable to create iceberg database / table using spark sql with AWS S3 + Glue integration

bvinayakumar opened a new issue #3131:
URL: https://github.com/apache/iceberg/issues/3131


   Unable to create an Iceberg database using Spark SQL by following the steps below.
   
   1. Download and extract the Spark package (e.g. `spark-3.1.2-bin-hadoop3.2.tgz`). A hedged sketch of this step (the URL follows the standard Apache archive layout for Spark 3.1.2; verify the mirror for your setup):
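
   ```
   # Illustrative download-and-extract; adjust the mirror/URL as needed.
   wget https://archive.apache.org/dist/spark/spark-3.1.2/spark-3.1.2-bin-hadoop3.2.tgz
   tar -xzf spark-3.1.2-bin-hadoop3.2.tgz
   ```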
   
   2. Enable access to the AWS resources (S3, DynamoDB, and Glue) that Iceberg may use:
   ```
   export AWS_ACCESS_KEY_ID=...
   export AWS_SECRET_ACCESS_KEY=...
   ```
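
   If the credentials are temporary (issued by STS or SSO), the AWS SDK also needs a session token, and a region should be set for the Glue/S3/DynamoDB clients; a minimal sketch with placeholder values:

   ```
   # Standard environment variables read by the AWS SDK for Java v2
   export AWS_SESSION_TOKEN=...     # only needed for temporary credentials
   export AWS_REGION=us-east-1      # placeholder region
   ```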
   
   3. Run the shell script described at https://iceberg.apache.org/aws/#spark:
   
   ```
   $ cd spark-3.1.2-bin-hadoop3.2
   
   $ cat spark-iceberg-script.sh
   # add Iceberg dependency
   ICEBERG_VERSION=0.12.0
   DEPENDENCIES="org.apache.iceberg:iceberg-spark3-runtime:$ICEBERG_VERSION"
   
   # add AWS dependency
   AWS_SDK_VERSION=2.15.40
   AWS_MAVEN_GROUP=software.amazon.awssdk
   AWS_PACKAGES=(
       "bundle"
       "url-connection-client"
   )
   for pkg in "${AWS_PACKAGES[@]}"; do
       DEPENDENCIES+=",$AWS_MAVEN_GROUP:$pkg:$AWS_SDK_VERSION"
   done
   
   # start Spark SQL client shell
   spark-sql --packages $DEPENDENCIES \
       --conf spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog \
       --conf spark.sql.catalog.my_catalog.warehouse=s3://iceberg-poc-bucket \
       --conf spark.sql.catalog.my_catalog.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog \
       --conf spark.sql.catalog.my_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
       --conf spark.sql.catalog.my_catalog.lock-impl=org.apache.iceberg.aws.glue.DynamoLockManager \
       --conf spark.sql.catalog.my_catalog.lock.table=myGlueLockTable
   ```
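
   The last two `--conf` lines point the catalog at a DynamoDB lock table, which the lock manager creates on first use if it is missing. As a hedged pre-flight check (standard AWS CLI; table name taken from the script above):

   ```
   # Verifies the exported credentials can reach DynamoDB; a
   # ResourceNotFoundException here only means the table will be
   # created on first use.
   aws dynamodb describe-table --table-name myGlueLockTable
   ```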
   
   ```
   $ ./spark-iceberg-script.sh
   :: loading settings :: url = jar:file:/opt/spark-3.1.2-bin-hadoop3.2/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
   Ivy Default Cache set to: /home/centos/.ivy2/cache
   The jars for the packages stored in: /home/centos/.ivy2/jars
   org.apache.iceberg#iceberg-spark3-runtime added as a dependency
   software.amazon.awssdk#bundle added as a dependency
   software.amazon.awssdk#url-connection-client added as a dependency
   :: resolving dependencies :: org.apache.spark#spark-submit-parent-123cd3e5-7348-4414-8646-0e1a577a625a;1.0
           confs: [default]
           found org.apache.iceberg#iceberg-spark3-runtime;0.12.0 in central
           found software.amazon.awssdk#bundle;2.15.40 in central
           found software.amazon.eventstream#eventstream;1.0.1 in central
           found software.amazon.awssdk#url-connection-client;2.15.40 in central
           found software.amazon.awssdk#utils;2.15.40 in central
           found org.reactivestreams#reactive-streams;1.0.2 in spark-list
           found software.amazon.awssdk#annotations;2.15.40 in central
           found org.slf4j#slf4j-api;1.7.28 in central
           found software.amazon.awssdk#http-client-spi;2.15.40 in central
           found software.amazon.awssdk#metrics-spi;2.15.40 in central
   :: resolution report :: resolve 531ms :: artifacts dl 16ms
           :: modules in use:
           org.apache.iceberg#iceberg-spark3-runtime;0.12.0 from central in [default]
           org.reactivestreams#reactive-streams;1.0.2 from spark-list in [default]
           org.slf4j#slf4j-api;1.7.28 from central in [default]
           software.amazon.awssdk#annotations;2.15.40 from central in [default]
           software.amazon.awssdk#bundle;2.15.40 from central in [default]
           software.amazon.awssdk#http-client-spi;2.15.40 from central in [default]
           software.amazon.awssdk#metrics-spi;2.15.40 from central in [default]
           software.amazon.awssdk#url-connection-client;2.15.40 from central in [default]
           software.amazon.awssdk#utils;2.15.40 from central in [default]
           software.amazon.eventstream#eventstream;1.0.1 from central in [default]
           ---------------------------------------------------------------------
           |                  |            modules            ||   artifacts   |
           |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
           ---------------------------------------------------------------------
           |      default     |   10  |   0   |   0   |   0   ||   10  |   0   |
           ---------------------------------------------------------------------
   :: retrieving :: org.apache.spark#spark-submit-parent-123cd3e5-7348-4414-8646-0e1a577a625a
           confs: [default]
           0 artifacts copied, 10 already retrieved (0kB/14ms)
   21/09/16 13:30:56 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
   Setting default log level to "WARN".
   To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
   21/09/16 13:31:02 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
   21/09/16 13:31:02 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
   21/09/16 13:31:06 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
   21/09/16 13:31:06 WARN ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore centos@10.0.5.196
   Spark master: local[*], Application Id: local-1631799058809
   
   spark-sql> SHOW DATABASES;
   default
   Time taken: 3.475 seconds, Fetched 1 row(s)
   
   spark-sql> CREATE DATABASE ts;
   21/09/16 13:31:23 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
   21/09/16 13:31:23 WARN ObjectStore: Failed to get database ts, returning NoSuchObjectException
   Time taken: 0.367 seconds
   
   spark-sql> SHOW DATABASES;
   default
   ts
   Time taken: 0.063 seconds, Fetched 2 row(s)
   
   spark-sql> CREATE TABLE ts.sample (
            >     id bigint,
            >     data string,
            >     category string)
            > USING iceberg
            > PARTITIONED BY (category);
   21/09/16 13:32:18 WARN HiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider iceberg. Persisting data source table `ts`.`sample` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
   21/09/16 13:32:18 WARN SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
   21/09/16 13:32:18 WARN HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
   21/09/16 13:32:18 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
   21/09/16 13:32:18 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
   Time taken: 1.254 seconds
   
   spark-sql> SHOW TABLES;
   Time taken: 0.094 seconds
   ```
   
   Any suggestions to resolve the warnings below?

   ```
   21/09/16 13:31:23 WARN ObjectStore: Failed to get database ts, returning NoSuchObjectException
   ...
   21/09/16 13:32:18 WARN HiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider iceberg. Persisting data source table `ts`.`sample` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
   ```
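
   For reference, the statements above were issued without a catalog prefix, so they resolved against Spark's default Hive session catalog rather than the configured Iceberg catalog (which is what the Hive metastore warnings suggest). A hedged sketch of the catalog-qualified form, using `my_catalog` from the launch script:

   ```
   -- Qualifying with the Iceberg catalog routes DDL through GlueCatalog
   -- instead of the local Hive metastore.
   CREATE DATABASE my_catalog.ts;

   CREATE TABLE my_catalog.ts.sample (
       id bigint,
       data string,
       category string)
   USING iceberg
   PARTITIONED BY (category);
   ```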
   




[GitHub] [iceberg] jackye1995 commented on issue #3131: Unable to create iceberg database / table using spark sql with AWS S3 + Glue integration

jackye1995 commented on issue #3131:
URL: https://github.com/apache/iceberg/issues/3131#issuecomment-923113121


   It looks like you downloaded the binary distribution and configured it to execute against a cluster. You need to make sure the Iceberg-related jars are present on all the Spark nodes, not just on the Spark SQL client.
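
   A hedged sketch of one way to do that, assuming a standalone cluster with illustrative host names and paths: copy the resolved runtime jars into each node's `jars/` directory so executors load them locally.

   ```
   # Illustrative only; repeat per worker node or use your cluster's
   # deploy tooling. The jar name matches the ~/.ivy2/jars layout from
   # the resolution log above.
   scp ~/.ivy2/jars/org.apache.iceberg_iceberg-spark3-runtime-0.12.0.jar \
       worker1:/opt/spark-3.1.2-bin-hadoop3.2/jars/
   ```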




[GitHub] [iceberg] jackye1995 commented on issue #3131: Unable to create iceberg database / table using spark sql with AWS S3 + Glue integration

jackye1995 commented on issue #3131:
URL: https://github.com/apache/iceberg/issues/3131#issuecomment-999005484


   > The security token included in the request is invalid
   
   This does not seem Iceberg-related; it looks like your AWS credentials are misconfigured, e.g. you did not set your session token.
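
   A hedged way to confirm which identity (if any) the exported credentials resolve to, using the standard AWS CLI:

   ```
   # If this fails with an invalid or expired token error, refresh the
   # credentials (including AWS_SESSION_TOKEN for temporary ones) and
   # restart spark-sql.
   aws sts get-caller-identity
   ```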




[GitHub] [iceberg] qq240035000 commented on issue #3131: Unable to create iceberg database / table using spark sql with AWS S3 + Glue integration

qq240035000 commented on issue #3131:
URL: https://github.com/apache/iceberg/issues/3131#issuecomment-997975644


   I followed the same steps, but my error is "software.amazon.awssdk.services.dynamodb.model.DynamoDbException: The security token included in the request is invalid". Can you help, @jackye1995?
   
   ```
   software.amazon.awssdk.services.dynamodb.model.DynamoDbException: The security token included in the request is invalid. (Service: DynamoDb, Status Code: 400, Request ID: OGG5KGTLEDSJVMMC3JUVV5514VVV4KQNSO5AEMVJF66Q9ASUAAJG, Extended Request ID: null)
           at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:123)
           at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:79)
           at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:59)
           at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:40)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:40)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:30)
           at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:73)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:77)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:39)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:50)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:36)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:64)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:34)
           at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
           at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
           at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:48)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:31)
           at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
           at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
           at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
           at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:193)
           at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:133)
           at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:159)
           at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:112)
           at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:167)
           at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:94)
           at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
           at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:55)
           at software.amazon.awssdk.services.dynamodb.DefaultDynamoDbClient.describeTable(DefaultDynamoDbClient.java:2164)
           at org.apache.iceberg.aws.glue.DynamoLockManager.tableExists(DynamoLockManager.java:145)
           at org.apache.iceberg.aws.glue.DynamoLockManager.ensureLockTableExistsOrCreate(DynamoLockManager.java:123)
           at org.apache.iceberg.aws.glue.DynamoLockManager.initialize(DynamoLockManager.java:175)
           at org.apache.iceberg.aws.glue.LockManagers.loadLockManager(LockManagers.java:75)
           at org.apache.iceberg.aws.glue.LockManagers.from(LockManagers.java:52)
           at org.apache.iceberg.aws.glue.GlueCatalog.initialize(GlueCatalog.java:99)
           at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:193)
           at org.apache.iceberg.CatalogUtil.buildIcebergCatalog(CatalogUtil.java:225)
           at org.apache.iceberg.spark.SparkCatalog.buildIcebergCatalog(SparkCatalog.java:105)
           at org.apache.iceberg.spark.SparkCatalog.initialize(SparkCatalog.java:388)
           at org.apache.spark.sql.connector.catalog.Catalogs$.load(Catalogs.scala:61)
           at org.apache.spark.sql.connector.catalog.CatalogManager.$anonfun$catalog$1(CatalogManager.scala:52)
           at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86)
           at org.apache.spark.sql.connector.catalog.CatalogManager.catalog(CatalogManager.scala:52)
           at org.apache.spark.sql.connector.catalog.LookupCatalog$CatalogAndNamespace$.unapply(LookupCatalog.scala:92)
           at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs$$anonfun$apply$1.applyOrElse(ResolveCatalogs.scala:201)
           at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs$$anonfun$apply$1.applyOrElse(ResolveCatalogs.scala:34)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$2(AnalysisHelper.scala:108)
           at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:108)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:221)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:106)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:104)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators(AnalysisHelper.scala:73)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators$(AnalysisHelper.scala:72)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
           at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs.apply(ResolveCatalogs.scala:34)
           at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs.apply(ResolveCatalogs.scala:29)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:216)
           at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
           at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
           at scala.collection.immutable.List.foldLeft(List.scala:89)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:213)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:205)
           at scala.collection.immutable.List.foreach(List.scala:392)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:205)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:196)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:190)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:155)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:183)
           at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:183)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:174)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:228)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:173)
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:73)
           at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
           at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143)
           at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:73)
           at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:71)
           at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:63)
           at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:98)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
           at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
           at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
           at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613)
           at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:67)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:381)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:500)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:494)
           at scala.collection.Iterator.foreach(Iterator.scala:941)
           at scala.collection.Iterator.foreach$(Iterator.scala:941)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
           at scala.collection.IterableLike.foreach(IterableLike.scala:74)
           at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
           at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:494)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:284)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
           at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951)
           at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
           at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
           at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
           at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039)
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   ```
   

