Posted to notifications@kyuubi.apache.org by GitBox <gi...@apache.org> on 2023/01/19 14:04:11 UTC

[GitHub] [kyuubi] Swarvenstein opened a new issue, #4194: [Bug] Unable to adjust kyuubi driver pod idle timeout.

Swarvenstein opened a new issue, #4194:
URL: https://github.com/apache/kyuubi/issues/4194

   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the [issues](https://github.com/apache/kyuubi/issues?q=is%3Aissue) and found no similar issues.
   
   
   ### Describe the bug
   
    Hello. We are running Kyuubi in a k8s environment and have hit an issue with adjusting the driver idle timeout: by default, the driver pod goes to the Completed state after ~50s of inactivity. We have tried the following parameters in kyuubi-configmap.yaml / kyuubi-defaults.conf:
   ```
       kyuubi.backend.engine.exec.pool.keepalive.time=PT10M
       kyuubi.backend.server.exec.pool.keepalive.time=PT10M
       kyuubi.batch.application.check.interval=PT10M
       kyuubi.engine.user.isolated.spark.session.idle.interval=PT10M
       kyuubi.frontend.thrift.worker.keepalive.time=PT10M
       kyuubi.ha.zookeeper.session.timeout=60000000
       kyuubi.zookeeper.embedded.min.session.timeout=600000
       kyuubi.zookeeper.embedded.max.session.timeout=6000000
       kyuubi.zookeeper.embedded.tick.time=30000
       kyuubi.session.engine.alive.timeout=PT10M
       kyuubi.session.engine.idle.timeout=PT10M
       kyuubi.session.engine.check.interval=PT10M
       kyuubi.session.engine.alive.probe.interval=PT10M
       kyuubi.session.engine.login.timeout=PT10M
       kyuubi.session.check.interval=PT10M
       kyuubi.session.idle.timeout=PT10M
       kyuubi.session.engine.startup.waitCompletion=true
   ```
    None of them (tried one by one and all together) changes when the driver is shut down. In the server log below, the last client operation finishes at 13:19:17 and the session is force-closed at 13:20:07, exactly 50 seconds later, after which the engine process is destroyed ("Destroy the process, since waitCompletion is false"), even though we set kyuubi.session.engine.startup.waitCompletion=true.
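    
    For completeness, this is roughly how we watch the engine driver pod (the namespace and the kyuubi-unique-tag label come from the spark-submit command in the server log below; the exact selector is illustrative):
    ```shell
    # Watch driver pods in the Spark jobs namespace; the pod flips from
    # Running to Completed roughly 50 seconds after the last client operation.
    kubectl -n kyuubi-jobs get pods -l kyuubi-unique-tag --watch
    ```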
   
    Why this is needed: driver startup takes about 20-30s, and we want to keep the driver idle for 5-10 minutes so that Kyuubi users do not pay this startup overhead on every separate request.
   
    I would be very grateful for any help in solving this issue.
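    
    If it helps to reproduce, our client flow is essentially the following (the JDBC URL is illustrative; we connect with anonymous authentication, matching kyuubi.authentication=NONE below):
    ```shell
    # Connect through the Kyuubi thrift frontend, run a trivial statement and
    # disconnect; the driver pod goes to Completed ~50s after the session closes.
    beeline -u 'jdbc:hive2://<kyuubi-host>:10009/' -n anonymous -e 'SELECT 1'
    ```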
   
   ### Affects Version(s)
   
   1.6.1
   
   ### Kyuubi Server Log Output
   
   ```logtalk
   2023-01-19 13:18:47.765 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10
   2023-01-19 13:18:47.766 INFO org.apache.kyuubi.session.KyuubiSessionManager: Opening session for anonymous@192.168.21.137
    2023-01-19 13:18:47.767 WARN org.apache.kyuubi.config.KyuubiConf: The Kyuubi config 'kyuubi.ha.zookeeper.namespace' has been deprecated in Kyuubi v1.6.0 and may be removed in the future. Use kyuubi.ha.namespace instead
   2023-01-19 13:18:47.767 WARN org.apache.kyuubi.config.KyuubiConf: The Kyuubi config 'kyuubi.frontend.bind.port' has been deprecated in Kyuubi v1.4.0 and may be removed in the future. Use kyuubi.frontend.thrift.binary.bind.port instead
   2023-01-19 13:18:47.768 WARN org.apache.kyuubi.config.KyuubiConf: The Kyuubi config 'kyuubi.frontend.bind.port' has been deprecated in Kyuubi v1.4.0 and may be removed in the future. Use kyuubi.frontend.thrift.binary.bind.port instead
   2023-01-19 13:18:47.769 INFO org.apache.kyuubi.operation.log.OperationLog: Creating operation log file /opt/kyuubi/work/server_operation_logs/35d34b4c-e204-485d-b559-8afde030e936/02f35cb1-348b-458d-b9da-8f833b60cf21
   2023-01-19 13:18:47.770 INFO org.apache.kyuubi.session.KyuubiSessionManager: anonymous's session with SessionHandle [35d34b4c-e204-485d-b559-8afde030e936] is opened, current opening sessions 1
   2023-01-19 13:18:47.770 INFO org.apache.kyuubi.operation.LaunchEngine: Processing anonymous's query[02f35cb1-348b-458d-b9da-8f833b60cf21]: PENDING_STATE -> RUNNING_STATE, statement:
   LaunchEngine
   2023-01-19 13:18:47.772 INFO org.apache.curator.framework.imps.CuratorFrameworkImpl: Starting
   2023-01-19 13:18:47.772 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=kyuubi-zookeeper.zookeeper:2181 sessionTimeout=60000000 watcher=org.apache.curator.ConnectionState@117edab5
   2023-01-19 13:18:47.777 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server kyuubi-zookeeper.zookeeper/10.108.54.123:2181. Will not attempt to authenticate using SASL (unknown error)
   2023-01-19 13:18:47.779 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to kyuubi-zookeeper.zookeeper/10.108.54.123:2181, initiating session
   2023-01-19 13:18:47.781 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server kyuubi-zookeeper.zookeeper/10.108.54.123:2181, sessionid = 0x300720fdb0619e3, negotiated timeout = 40000
   2023-01-19 13:18:47.782 INFO org.apache.curator.framework.state.ConnectionStateManager: State change: CONNECTED
   2023-01-19 13:18:47.786 INFO org.apache.kyuubi.engine.EngineRef: Launching engine:
   /opt/spark/bin/spark-submit \
       --class org.apache.kyuubi.engine.spark.SparkSQLEngine \
       --conf spark.kyuubi.frontend.connection.url.use.hostname=false \
       --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
       --conf spark.sql.warehouse.dir=s3a://<bucket_name>/ \
       --conf spark.kyuubi.frontend.thrift.worker.keepalive.time=PT10M \
       --conf spark.kyuubi.session.engine.idle.timeout=PT10M \
       --conf spark.hadoop.fs.s3a.connection.ssl.enabled=true \
       --conf spark.kyuubi.session.engine.check.interval=PT10M \
       --conf spark.kubernetes.namespace=kyuubi-jobs \
       --conf spark.eventLog.enabled=true \
       --conf spark.kubernetes.driver.podTemplateFile=/opt/kyuubi/conf/pod_template.yml \
       --conf spark.sql.adaptive.enabled=true \
       --conf spark.hive.server2.thrift.resultset.default.fetch.size=1000 \
       --conf spark.kyuubi.session.check.interval=PT10M \
       --conf spark.kyuubi.backend.server.exec.pool.keepalive.time=PT10M \
       --conf spark.kyuubi.session.idle.timeout=PT10M \
       --conf spark.kyuubi.engine.user.isolated.spark.session.idle.interval=PT10M \
       --conf spark.eventLog.rotation.minFileSize=100m \
       --conf spark.kyuubi.session.engine.startup.waitCompletion=true \
       --conf spark.kyuubi.engine.submit.time=1674134327784 \
       --conf spark.kubernetes.driver.label.kyuubi-unique-tag=35d34b4c-e204-485d-b559-8afde030e936 \
       --conf spark.hadoop.fs.s3a.downgrade.syncable.exceptions=true \
       --conf spark.app.name=kyuubi_CONNECTION_SPARK_SQL_anonymous_35d34b4c-e204-485d-b559-8afde030e936 \
       --conf spark.kubernetes.driver.podNamePrefix=kyuubi-anonymous-driver \
       --conf spark.eventLog.rotation.maxFilesToRetain=2 \
       --conf spark.kyuubi.ha.addresses=kyuubi-zookeeper.zookeeper:2181 \
       --conf spark.kyuubi.zookeeper.embedded.tick.time=30000 \
       --conf spark.driver.memory=2g \
       --conf spark.kyuubi.metrics.reporters=PROMETHEUS \
       --conf spark.kyuubi.ha.zookeeper.session.timeout=60000000 \
       --conf spark.eventLog.rotation.enabled=true \
       --conf spark.kubernetes.authenticate.serviceAccountName=de-kyuubi \
       --conf spark.kyuubi.batch.application.check.interval=PT10M \
       --conf spark.sql.parquet.mergeSchema=true \
       --conf spark.kyuubi.backend.engine.exec.pool.keepalive.time=PT10M \
       --conf spark.kyuubi.ha.engine.ref.id=35d34b4c-e204-485d-b559-8afde030e936 \
       --conf spark.dynamicAllocation.maxExecutors=20 \
       --conf spark.kubernetes.container.image=harbor.dwh.runit.cc/de-image-spark/spark:v3.0.65 \
       --conf spark.kubernetes.executor.podTemplateFile=/opt/kyuubi/conf/pod_template.yml \
       --conf spark.dynamicAllocation.executorAllocationRatio=1 \
       --conf spark.driver.extraJavaOptions=-Divy.cache.dir=/tmp -Divy.home=/tmp \
       --conf spark.sql.sources.partitionOverwriteMode=dynamic \
       --conf spark.submit.deployMode=cluster \
       --conf spark.hadoop.fs.s3a.change.detection.mode=warn \
       --conf spark.sql.broadcastTimeout=30000 \
       --conf spark.kyuubi.zookeeper.embedded.max.session.timeout=6000000 \
       --conf spark.master=k8s://https://kubernetes.default.svc:443 \
       --conf spark.kubernetes.authenticate.driver.serviceAccountName=de-kyuubi \
       --conf spark.kubernetes.executor.podNamePrefix=kyuubi-anonymous \
       --conf spark.kyuubi.engine.share.level=CONNECTION \
       --conf spark.kyuubi.ha.zookeeper.namespace=kyuubi-de \
       --conf spark.eventLog.dir=s3a://<logs_bucket_name>/ \
       --conf spark.kyuubi.session.engine.alive.timeout=PT10M \
       --conf spark.dynamicAllocation.enabled=true \
       --conf spark.kubernetes.authenticate.oauthTokenFile=/var/run/secrets/kubernetes.io/serviceaccount/token \
       --conf spark.sql.legacy.parquet.datetimeRebaseModeInWrite=LEGACY \
       --conf spark.kyuubi.client.ipAddress=192.168.56.1 \
       --conf spark.kyuubi.ha.enabled=true \
       --conf spark.hadoop.fs.s3a.endpoint=<s3_endpoint> \
       --conf spark.kyuubi.zookeeper.embedded.min.session.timeout=600000 \
       --conf spark.kyuubi.session.engine.alive.probe.interval=PT10M \
       --conf spark.kubernetes.authenticate.caCertFile=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
       --conf spark.dynamicAllocation.minExecutors=2 \
       --conf spark.dynamicAllocation.initialExecutors=2 \
       --conf spark.kubernetes.file.upload.path=s3a://<bucket_name> \
       --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
       --conf spark.eventLog.rotation.interval=3600 \
       --conf spark.dynamicAllocation.executorIdleTimeout=600s \
       --conf spark.kyuubi.session.engine.login.timeout=PT10M \
       --conf spark.kyuubi.ha.namespace=/kyuubi-de_1.6.1-incubating_CONNECTION_SPARK_SQL/anonymous/35d34b4c-e204-485d-b559-8afde030e936 \
       --conf spark.hadoop.fs.s3a.fast.upload=true \
       --conf spark.kubernetes.driverEnv.SPARK_USER_NAME=anonymous \
       --conf spark.executorEnv.SPARK_USER_NAME=anonymous \
       --proxy-user anonymous /opt/kyuubi/externals/engines/spark/kyuubi-spark-sql-engine_2.12-1.6.1-incubating.jar
   2023-01-19 13:18:47.787 INFO org.apache.kyuubi.engine.ProcBuilder: Logging to /opt/kyuubi/work/anonymous/kyuubi-spark-sql-engine.log.5
   2023-01-19 13:19:16.833 INFO org.apache.kyuubi.ha.client.zookeeper.ZookeeperDiscoveryClient: Get service instance:10.10.46.26:37197 and version:Some(1.6.1-incubating) under /kyuubi-de_1.6.1-incubating_CONNECTION_SPARK_SQL/anonymous/35d34b4c-e204-485d-b559-8afde030e936
   2023-01-19 13:19:17.178 INFO org.apache.kyuubi.session.KyuubiSessionImpl: [anonymous:192.168.56.1] SessionHandle [35d34b4c-e204-485d-b559-8afde030e936] - Connected to engine [10.10.46.26:37197]/[spark-85247ce9c71c432290615c69a192b85e] with SessionHandle [0d8be737-c71d-4870-954b-831835390f09]]
   2023-01-19 13:19:17.178 INFO org.apache.curator.framework.imps.CuratorFrameworkImpl: backgroundOperationsLoop exiting
   2023-01-19 13:19:17.182 INFO org.apache.zookeeper.ZooKeeper: Session: 0x300720fdb0619e3 closed
   2023-01-19 13:19:17.182 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down for session: 0x300720fdb0619e3
   2023-01-19 13:19:17.182 INFO org.apache.kyuubi.operation.LaunchEngine: Processing anonymous's query[02f35cb1-348b-458d-b9da-8f833b60cf21]: RUNNING_STATE -> FINISHED_STATE, time taken: 29.411 seconds
   2023-01-19 13:19:17.238 INFO org.apache.kyuubi.operation.log.OperationLog: Creating operation log file /opt/kyuubi/work/server_operation_logs/35d34b4c-e204-485d-b559-8afde030e936/e568af82-e989-4750-94da-c2ddc6f95df9
   2023-01-19 13:19:17.238 INFO org.apache.kyuubi.session.KyuubiSessionImpl: [anonymous:192.168.56.1] SessionHandle [35d34b4c-e204-485d-b559-8afde030e936] - Starting to wait the launch engine operation finished
   2023-01-19 13:19:17.238 INFO org.apache.kyuubi.session.KyuubiSessionImpl: [anonymous:192.168.56.1] SessionHandle [35d34b4c-e204-485d-b559-8afde030e936] - Engine has been launched, elapsed time: 0 s
   2023-01-19 13:19:17.275 INFO org.apache.kyuubi.operation.ExecuteStatement: Processing anonymous's query[e568af82-e989-4750-94da-c2ddc6f95df9]: PENDING_STATE -> RUNNING_STATE, statement:
   _GET_CATALOG
   2023-01-19 13:19:17.316 INFO org.apache.kyuubi.operation.ExecuteStatement: Query[e568af82-e989-4750-94da-c2ddc6f95df9] in FINISHED_STATE
   2023-01-19 13:19:17.316 INFO org.apache.kyuubi.operation.ExecuteStatement: Processing anonymous's query[e568af82-e989-4750-94da-c2ddc6f95df9]: RUNNING_STATE -> FINISHED_STATE, time taken: 0.041 seconds
   2023-01-19 13:19:17.467 INFO org.apache.kyuubi.client.KyuubiSyncThriftClient: TCloseOperationReq(operationHandle:TOperationHandle(operationId:THandleIdentifier(guid:54 BD FF 01 92 47 49 30 99 65 C1 D9 25 33 8F 5F, secret:C2 EE 5B 97 3E A0 41 FC AC 16 9B D7 08 ED 8F 38), operationType:EXECUTE_STATEMENT, hasResultSet:true)) succeed on engine side
   2023-01-19 13:19:17.600 INFO org.apache.kyuubi.operation.GetTypeInfo: Processing anonymous's query[0757545a-952b-4fd8-beef-67a454168c7c]: INITIALIZED_STATE -> RUNNING_STATE, statement:
   GetTypeInfo
   2023-01-19 13:19:17.615 INFO org.apache.kyuubi.operation.GetTypeInfo: Processing anonymous's query[0757545a-952b-4fd8-beef-67a454168c7c]: RUNNING_STATE -> FINISHED_STATE, time taken: 0.015 seconds
   2023-01-19 13:19:17.720 INFO org.apache.kyuubi.client.KyuubiSyncThriftClient: TCloseOperationReq(operationHandle:TOperationHandle(operationId:THandleIdentifier(guid:59 42 36 BD 4D 08 42 BE 8E AF 7E 8F 5F C2 B8 31, secret:C2 EE 5B 97 3E A0 41 FC AC 16 9B D7 08 ED 8F 38), operationType:GET_TYPE_INFO, hasResultSet:true)) succeed on engine side
   2023-01-19 13:19:17.736 INFO org.apache.kyuubi.operation.GetCatalogs: Processing anonymous's query[abf2867b-40c5-4618-88d9-d3130498de34]: INITIALIZED_STATE -> RUNNING_STATE, statement:
   GetCatalogs
   2023-01-19 13:19:17.753 INFO org.apache.kyuubi.operation.GetCatalogs: Processing anonymous's query[abf2867b-40c5-4618-88d9-d3130498de34]: RUNNING_STATE -> FINISHED_STATE, time taken: 0.017 seconds
   2023-01-19 13:19:17.819 INFO org.apache.kyuubi.client.KyuubiSyncThriftClient: TCloseOperationReq(operationHandle:TOperationHandle(operationId:THandleIdentifier(guid:FC BE F7 72 34 8C 45 23 87 08 00 D8 56 BA 21 21, secret:C2 EE 5B 97 3E A0 41 FC AC 16 9B D7 08 ED 8F 38), operationType:GET_CATALOGS, hasResultSet:true)) succeed on engine side
   2023-01-19 13:20:07.821 INFO org.apache.kyuubi.server.KyuubiTBinaryFrontendService: Session [SessionHandle [35d34b4c-e204-485d-b559-8afde030e936]] disconnected without closing properly, close it now
   2023-01-19 13:20:07.821 INFO org.apache.kyuubi.session.KyuubiSessionManager: SessionHandle [35d34b4c-e204-485d-b559-8afde030e936] is closed, current opening sessions 0
   2023-01-19 13:20:07.875 INFO org.apache.kyuubi.engine.ProcBuilder: Destroy the process, since waitCompletion is false.
   ```
   
   
   ### Kyuubi Engine Log Output
   
   ```logtalk
   ++ id -u
   + myuid=185
   ++ id -g
   + mygid=0
   + set +e
   ++ getent passwd 185
   + uidentry=
   + set -e
   + '[' -z '' ']'
   + '[' -w /etc/passwd ']'
   + echo 185:x:185:0:anonymous:/opt/spark:/bin/false
   + '[' -z /usr/local/openjdk-11 ']'
   + SPARK_CLASSPATH=':/opt/spark/jars/*'
   + env
   + grep SPARK_JAVA_OPT_
   + sort -t_ -k4 -n
   + sed 's/[^=]*=\(.*\)/\1/g'
   + readarray -t SPARK_EXECUTOR_JAVA_OPTS
   + '[' -n '' ']'
   + '[' -z ']'
   + '[' -z ']'
   + '[' -n '' ']'
   + '[' -z x ']'
   + SPARK_CLASSPATH='/opt/hadoop/conf::/opt/spark/jars/*'
   + '[' -z x ']'
   + SPARK_CLASSPATH='/opt/spark/conf:/opt/hadoop/conf::/opt/spark/jars/*'
   + case "$1" in
   + shift 1
   + CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
   + exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=10.10.46.26 --deploy-mode client --proxy-user anonymous --properties-file /opt/spark/conf/spark.properties --class org.apache.kyuubi.engine.spark.SparkSQLEngine spark-internal
   23/01/19 13:18:58 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   23/01/19 13:18:58 WARN MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
   23/01/19 13:19:01 INFO SignalRegister: Registering signal handler for TERM
   23/01/19 13:19:01 INFO SignalRegister: Registering signal handler for HUP
   23/01/19 13:19:01 INFO SignalRegister: Registering signal handler for INT
   23/01/19 13:19:01 INFO HiveConf: Found configuration file file:/opt/hadoop/conf/hive-site.xml
   23/01/19 13:19:01 WARN KyuubiConf: The Kyuubi config 'kyuubi.ha.zookeeper.namespace' has been deprecated in Kyuubi v1.6.0 and may be removed in the future. Use kyuubi.ha.namespace instead
   23/01/19 13:19:01 INFO SparkContext: Running Spark version 3.3.0
   23/01/19 13:19:01 INFO ResourceUtils: ==============================================================
   23/01/19 13:19:01 INFO ResourceUtils: No custom resources configured for spark.driver.
   23/01/19 13:19:01 INFO ResourceUtils: ==============================================================
   23/01/19 13:19:01 INFO SparkContext: Submitted application: kyuubi_CONNECTION_SPARK_SQL_anonymous_35d34b4c-e204-485d-b559-8afde030e936
   23/01/19 13:19:01 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
   23/01/19 13:19:01 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
   23/01/19 13:19:01 INFO ResourceProfileManager: Added ResourceProfile id: 0
   23/01/19 13:19:01 INFO SecurityManager: Changing view acls to: 185,anonymous
   23/01/19 13:19:01 INFO SecurityManager: Changing modify acls to: 185,anonymous
   23/01/19 13:19:01 INFO SecurityManager: Changing view acls groups to: 
   23/01/19 13:19:01 INFO SecurityManager: Changing modify acls groups to: 
   23/01/19 13:19:01 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(185, anonymous); groups with view permissions: Set(); users  with modify permissions: Set(185, anonymous); groups with modify permissions: Set()
   23/01/19 13:19:01 INFO Utils: Successfully started service 'sparkDriver' on port 7078.
   23/01/19 13:19:01 INFO SparkEnv: Registering MapOutputTracker
   23/01/19 13:19:01 INFO SparkEnv: Registering BlockManagerMaster
   23/01/19 13:19:01 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
   23/01/19 13:19:01 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
   23/01/19 13:19:01 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
   23/01/19 13:19:01 INFO DiskBlockManager: Created local directory at /var/data/spark-250920f8-2aa9-4458-99de-9aa60e944508/blockmgr-fa1c6bde-d5ca-4a17-a151-07b2f72f47d4
   23/01/19 13:19:02 INFO MemoryStore: MemoryStore started with capacity 1048.8 MiB
   23/01/19 13:19:02 INFO SparkEnv: Registering OutputCommitCoordinator
   23/01/19 13:19:02 INFO Utils: Successfully started service 'SparkUI' on port 4040.
   23/01/19 13:19:02 INFO SparkContext: Added JAR file:/tmp/spark-aff3b193-2214-4183-b374-38e30d7a5b28/kyuubi-spark-sql-engine_2.12-1.6.1-incubating.jar at spark://spark-25d8f985ca2ea186-driver-svc.kyuubi-jobs.svc:7078/jars/kyuubi-spark-sql-engine_2.12-1.6.1-incubating.jar with timestamp 1674134341366
   23/01/19 13:19:02 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
   23/01/19 13:19:03 WARN Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
   23/01/19 13:19:03 INFO Utils: Using initial executors = 2, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
   23/01/19 13:19:03 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes for ResourceProfile Id: 0, target: 2, known: 0, sharedSlotFromPendingPods: 2147483647.
   23/01/19 13:19:03 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : log4j.properties,metrics.properties,jmx_config.yml
   23/01/19 13:19:03 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : log4j.properties,metrics.properties,jmx_config.yml
   23/01/19 13:19:03 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
   23/01/19 13:19:03 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 7079.
   23/01/19 13:19:03 INFO NettyBlockTransferService: Server created on spark-25d8f985ca2ea186-driver-svc.kyuubi-jobs.svc:7079
   23/01/19 13:19:03 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
   23/01/19 13:19:03 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, spark-25d8f985ca2ea186-driver-svc.kyuubi-jobs.svc, 7079, None)
   23/01/19 13:19:03 INFO BlockManagerMasterEndpoint: Registering block manager spark-25d8f985ca2ea186-driver-svc.kyuubi-jobs.svc:7079 with 1048.8 MiB RAM, BlockManagerId(driver, spark-25d8f985ca2ea186-driver-svc.kyuubi-jobs.svc, 7079, None)
   23/01/19 13:19:03 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, spark-25d8f985ca2ea186-driver-svc.kyuubi-jobs.svc, 7079, None)
   23/01/19 13:19:03 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, spark-25d8f985ca2ea186-driver-svc.kyuubi-jobs.svc, 7079, None)
   23/01/19 13:19:03 INFO KubernetesClientUtils: Spark configuration files loaded from Some(/opt/spark/conf) : log4j.properties,metrics.properties,jmx_config.yml
   23/01/19 13:19:03 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
   23/01/19 13:19:03 INFO SingleEventLogFileWriter: Logging events to s3a://<logs_bucket_name>.inprogress
   23/01/19 13:19:04 WARN S3ABlockOutputStream: Application invoked the Syncable API against stream writing to shared/spark-logs/spark-85247ce9c71c432290615c69a192b85e.inprogress. This is unsupported
   23/01/19 13:19:04 WARN Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
   23/01/19 13:19:04 INFO Utils: Using initial executors = 2, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
   23/01/19 13:19:04 INFO ExecutorAllocationManager: Dynamic allocation is enabled without a shuffle service.
   23/01/19 13:19:08 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.10.110.198:59146) with ID 1,  ResourceProfileId 0
   23/01/19 13:19:08 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.10.81.195:57268) with ID 2,  ResourceProfileId 0
   23/01/19 13:19:08 INFO ExecutorMonitor: New executor 1 has registered (new total is 1)
   23/01/19 13:19:08 INFO ExecutorMonitor: New executor 2 has registered (new total is 2)
   23/01/19 13:19:08 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
   23/01/19 13:19:08 INFO BlockManagerMasterEndpoint: Registering block manager 10.10.81.195:40511 with 413.9 MiB RAM, BlockManagerId(2, 10.10.81.195, 40511, None)
   23/01/19 13:19:08 INFO BlockManagerMasterEndpoint: Registering block manager 10.10.110.198:34749 with 413.9 MiB RAM, BlockManagerId(1, 10.10.110.198, 34749, None)
   23/01/19 13:19:08 INFO SharedState: Setting hive.metastore.warehouse.dir ('s3a://<bucket_name>') to the value of spark.sql.warehouse.dir.
   23/01/19 13:19:08 INFO SharedState: Warehouse path is 's3a://<bucket_name>'.
   23/01/19 13:19:08 WARN SQLConf: The SQL config 'spark.sql.legacy.parquet.datetimeRebaseModeInWrite' has been deprecated in Spark v3.2 and may be removed in the future. Use 'spark.sql.parquet.datetimeRebaseModeInWrite' instead.
   23/01/19 13:19:08 WARN SQLConf: The SQL config 'spark.sql.legacy.parquet.datetimeRebaseModeInWrite' has been deprecated in Spark v3.2 and may be removed in the future. Use 'spark.sql.parquet.datetimeRebaseModeInWrite' instead.
   23/01/19 13:19:08 WARN SQLConf: The SQL config 'spark.sql.legacy.parquet.datetimeRebaseModeInWrite' has been deprecated in Spark v3.2 and may be removed in the future. Use 'spark.sql.parquet.datetimeRebaseModeInWrite' instead.
   23/01/19 13:19:11 WARN SQLConf: The SQL config 'spark.sql.legacy.parquet.datetimeRebaseModeInWrite' has been deprecated in Spark v3.2 and may be removed in the future. Use 'spark.sql.parquet.datetimeRebaseModeInWrite' instead.
   23/01/19 13:19:12 INFO HiveUtils: Initializing HiveMetastoreConnection version 2.3.9 using Spark classes.
   23/01/19 13:19:12 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
   23/01/19 13:19:12 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
   23/01/19 13:19:12 INFO HiveClientImpl: Warehouse location for Hive client (version 2.3.9) is s3a://<bucket_name>
   23/01/19 13:19:12 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
   23/01/19 13:19:12 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
   23/01/19 13:19:12 INFO HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
   23/01/19 13:19:12 INFO ObjectStore: ObjectStore, initialize called
   23/01/19 13:19:12 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
   23/01/19 13:19:12 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
   23/01/19 13:19:13 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
   23/01/19 13:19:13 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
   23/01/19 13:19:13 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
   23/01/19 13:19:14 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is POSTGRES
   23/01/19 13:19:14 INFO ObjectStore: Initialized ObjectStore
   23/01/19 13:19:14 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
   23/01/19 13:19:14 WARN ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore UNKNOWN@10.10.46.26
   23/01/19 13:19:14 INFO HiveMetaStore: Added admin role in metastore
   23/01/19 13:19:14 INFO HiveMetaStore: Added public role in metastore
   23/01/19 13:19:14 INFO HiveMetaStore: No user is added in admin role, since config is empty
   23/01/19 13:19:14 INFO HiveMetaStore: 0: get_database: default
   23/01/19 13:19:14 INFO audit: ugi=anonymous    ip=unknown-ip-addr    cmd=get_database: default    
   23/01/19 13:19:14 INFO HiveMetaStore: 0: get_databases: *
   23/01/19 13:19:14 INFO audit: ugi=anonymous    ip=unknown-ip-addr    cmd=get_databases: *    
   23/01/19 13:19:14 INFO CodeGenerator: Code generated in 218.772709 ms
   23/01/19 13:19:14 INFO CodeGenerator: Code generated in 11.306738 ms
   23/01/19 13:19:14 INFO CodeGenerator: Code generated in 15.345877 ms
   23/01/19 13:19:15 INFO SparkContext: Starting job: isEmpty at KyuubiSparkUtil.scala:48
   23/01/19 13:19:15 INFO DAGScheduler: Got job 0 (isEmpty at KyuubiSparkUtil.scala:48) with 1 output partitions
   23/01/19 13:19:15 INFO DAGScheduler: Final stage: ResultStage 0 (isEmpty at KyuubiSparkUtil.scala:48)
   23/01/19 13:19:15 INFO DAGScheduler: Parents of final stage: List()
   23/01/19 13:19:15 INFO DAGScheduler: Missing parents: List()
   23/01/19 13:19:15 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at isEmpty at KyuubiSparkUtil.scala:48), which has no missing parents
   23/01/19 13:19:15 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 6.7 KiB, free 1048.8 MiB)
   23/01/19 13:19:15 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 3.6 KiB, free 1048.8 MiB)
   23/01/19 13:19:15 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on spark-25d8f985ca2ea186-driver-svc.kyuubi-jobs.svc:7079 (size: 3.6 KiB, free: 1048.8 MiB)
   23/01/19 13:19:15 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1513
   23/01/19 13:19:15 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at isEmpty at KyuubiSparkUtil.scala:48) (first 15 tasks are for partitions Vector(0))
   23/01/19 13:19:15 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
   23/01/19 13:19:15 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (10.10.81.195, executor 2, partition 0, PROCESS_LOCAL, 8496 bytes) taskResourceAssignments Map()
   23/01/19 13:19:15 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.10.81.195:40511 (size: 3.6 KiB, free: 413.9 MiB)
   23/01/19 13:19:16 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 997 ms on 10.10.81.195 (executor 2) (1/1)
   23/01/19 13:19:16 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
   23/01/19 13:19:16 INFO DAGScheduler: ResultStage 0 (isEmpty at KyuubiSparkUtil.scala:48) finished in 1.233 s
   23/01/19 13:19:16 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
   23/01/19 13:19:16 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
   23/01/19 13:19:16 INFO DAGScheduler: Job 0 finished: isEmpty at KyuubiSparkUtil.scala:48, took 1.286901 s
   23/01/19 13:19:16 INFO ThreadUtils: SparkSQLSessionManager-exec-pool: pool size: 100, wait queue size: 100, thread keepalive time: 600000 ms
   23/01/19 13:19:16 INFO SparkSQLOperationManager: Service[SparkSQLOperationManager] is initialized.
   23/01/19 13:19:16 INFO SparkSQLSessionManager: Service[SparkSQLSessionManager] is initialized.
   23/01/19 13:19:16 INFO SparkSQLBackendService: Service[SparkSQLBackendService] is initialized.
   23/01/19 13:19:16 INFO SparkTBinaryFrontendService: Initializing SparkTBinaryFrontend on kyuubi-connection-spark-sql-anonymous-35d34b4c-e204-485d-b559-8:37197 with [9, 999] worker threads
   23/01/19 13:19:16 INFO CuratorFrameworkImpl: Starting
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:host.name=kyuubi-connection-spark-sql-anonymous-35d34b4c-e204-485d-b559-8
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:java.version=11.0.15
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:java.vendor=Oracle Corporation
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:java.home=/usr/local/openjdk-11
    23/01/19 13:19:16 INFO ZooKeeper: Client environment:java.class.path=/opt/spark/conf/:/opt/spark/jars/bonecp-0.8.0.RELEASE.jar:/opt/spark/jars/osgi-resource-locator-1.0.3.jar:/opt/spark/jars/json-1.8.jar:/opt/spark/jars/parquet-column-1.12.2.jar:/opt/spark/jars/jersey-client-2.34.jar:/opt/spark/jars/snakeyaml-1.30.jar:/opt/spark/jars/jodd-core-3.5.2.jar:/opt/spark/jars/httpcore-4.4.14.jar:/opt/spark/jars/jersey-container-servlet-core-2.34.jar:/opt/spark/jars/jaxb-runtime-2.3.2.jar:/opt/spark/jars/netty-transport-native-kqueue-4.1.74.Final-osx-x86_64.jar:/opt/spark/jars/jackson-module-scala_2.12-2.13.3.jar:/opt/spark/jars/scala-xml_2.12-1.2.0.jar:/opt/spark/jars/jackson-mapper-asl-1.9.13.jar:/opt/spark/jars/jackson-databind-2.13.3.jar:/opt/spark/jars/spark-launcher_2.12-3.3.0.jar:/opt/spark/jars/kubernetes-model-core-5.12.2.jar:/opt/spark/jars/scala-compiler-2.12.15.jar:/opt/spark/jars/javassist-3.25.0-GA.jar:/opt/spark/jars/spark-kvstore_2.12-3.3.0.jar:/opt/spark/jars/hive-shims-0.23-2.3.9.jar:/opt/spark/jars/kubernetes-model-extensions-5.12.2.jar:/opt/spark/jars/spark-kubernetes_2.12-3.3.0.jar:/opt/spark/jars/commons-lang3-3.12.0.jar:/opt/spark/jars/scala-collection-compat_2.12-2.1.1.jar:/opt/spark/jars/guava-14.0.1.jar:/opt/spark/jars/istack-commons-runtime-3.0.8.jar:/opt/spark/jars/commons-compress-1.21.jar:/opt/spark/jars/metrics-jmx-4.2.7.jar:/opt/spark/jars/spark-network-common_2.12-3.3.0.jar:/opt/spark/jars/arrow-vector-7.0.0.jar:/opt/spark/jars/breeze-macros_2.12-1.2.jar:/opt/spark/jars/volcano-model-v1beta1-5.12.2.jar:/opt/spark/jars/chill_2.12-0.10.0.jar:/opt/spark/jars/velocity-1.5.jar:/opt/spark/jars/parquet-encoding-1.12.2.jar:/opt/spark/jars/log4j-1.2-api-2.17.2.jar:/opt/spark/jars/jakarta.ws.rs-api-2.1.6.jar:/opt/spark/jars/jta-1.1.jar:/opt/spark/jars/datanucleus-rdbms-4.1.19.jar:/opt/spark/jars/jersey-hk2-2.34.jar:/opt/spark/jars/algebra_2.12-2.0.1.jar:/opt/spark/jars/jakarta.annotation-api-1.3.5.jar:/opt/spark/jars/rocksdbjni-6.20.3.jar:/opt/spark/jars/kubernetes-model-certificates-5.12.2.jar:/opt/spark/jars/arrow-format-7.0.0.jar:/opt/spark/jars/okhttp-3.12.12.jar:/opt/spark/jars/commons-collections-3.2.2.jar:/opt/spark/jars/netty-transport-classes-epoll-4.1.74.Final.jar:/opt/spark/jars/commons-collections4-4.4.jar:/opt/spark/jars/curator-recipes-2.13.0.jar:/opt/spark/jars/json4s-scalap_2.12-3.7.0-M11.jar:/opt/spark/jars/metrics-graphite-4.2.7.jar:/opt/spark/jars/jackson-annotations-2.13.3.jar:/opt/spark/jars/jersey-common-2.34.jar:/opt/spark/jars/spark-catalyst_2.12-3.3.0.jar:/opt/spark/jars/volcano-client-5.12.2.jar:/opt/spark/jars/zookeeper-3.6.2.jar:/opt/spark/jars/hive-shims-2.3.9.jar:/opt/spark/jars/antlr-runtime-3.5.2.jar:/opt/spark/jars/derby-10.14.2.0.jar:/opt/spark/jars/kubernetes-model-scheduling-5.12.2.jar:/opt/spark/jars/spark-mllib-local_2.12-3.3.0.jar:/opt/spark/jars/javolution-5.5.1.jar:/opt/spark/jars/janino-3.0.16.jar:/opt/spark/jars/kubernetes-model-admissionregistration-5.12.2.jar:/opt/spark/jars/parquet-hadoop-1.12.2.jar:/opt/spark/jars/JTransforms-3.1.jar:/opt/spark/jars/avro-mapred-1.11.0.jar:/opt/spark/jars/spark-network-shuffle_2.12-3.3.0.jar:/opt/spark/jars/kubernetes-model-rbac-5.12.2.jar:/opt/spark/jars/metrics-json-4.2.7.jar:/opt/spark/jars/log4j-core-2.17.2.jar:/opt/spark/jars/kubernetes-model-apiextensions-5.12.2.jar:/opt/spark/jars/ST4-4.0.4.jar:/opt/spark/jars/spark-streaming_2.12-3.3.0.jar:/opt/spark/jars/core-1.1.2.jar:/opt/spark/jars/hive-shims-common-2.3.9.jar:/opt/spark/jars/okio-1.14.0.jar:/opt/spark/jars/shapeless_2.12-2.3.7.jar:/opt/spark/jars/netty-tcnative-classes-2.0.48.Final.jar:/opt/spark/jars/hadoop-client-api-3.3.4.jar:/opt/spark/jars/automaton-1.11-8.jar:/opt/spark/jars/hk2-locator-2.6.1.jar:/opt/spark/jars/snappy-java-1.1.8.4.jar:/opt/spark/jars/avro-1.11.0.jar:/opt/spark/jars/hive-vector-code-gen-2.3.9.jar:/opt/spark/jars/jakarta.servlet-api-4.0.3.jar:/opt/spark/jars/paranamer-2.8.jar:/opt/spark/jars/jline-2.14.6.jar:/opt/spark/jars/arrow-memory-core-7.0.0.jar:/opt/spark/jars/spark-sql_2.12-3.3.0.jar:/opt/spark/jars/hive-shims-scheduler-2.3.9.jar:/opt/spark/jars/commons-pool-1.5.4.jar:/opt/spark/jars/activation-1.1.1.jar:/opt/spark/jars/netty-transport-4.1.74.Final.jar:/opt/spark/jars/spire-util_2.12-0.17.0.jar:/opt/spark/jars/kubernetes-model-storageclass-5.12.2.jar:/opt/spark/jars/arrow-memory-netty-7.0.0.jar:/opt/spark/jars/aopalliance-repackaged-2.6.1.jar:/opt/spark/jars/spark-hive_2.12-3.3.0.jar:/opt/spark/jars/libfb303-0.9.3.jar:/opt/spark/jars/kryo-shaded-4.0.2.jar:/opt/spark/jars/json4s-ast_2.12-3.7.0-M11.jar:/opt/spark/jars/tink-1.6.1.jar:/opt/spark/jars/arpack-2.2.1.jar:/opt/spark/jars/spark-sketch_2.12-3.3.0.jar:/opt/spark/jars/antlr4-runtime-4.8.jar:/opt/spark/jars/netty-all-4.1.74.Final.jar:/opt/spark/jars/netty-transport-native-kqueue-4.1.74.Final-osx-aarch_64.jar:/opt/spark/jars/metrics-core-4.2.7.jar:/opt/spark/jars/commons-crypto-1.1.0.jar:/opt/spark/jars/spark-unsafe_2.12-3.3.0.jar:/opt/spark/jars/objenesis-3.2.jar:/opt/spark/jars/stream-2.9.6.jar:/opt/spark/jars/jackson-dataformat-yaml-2.13.3.jar:/opt/spark/jars/jul-to-slf4j-1.7.32.jar:/opt/spark/jars/lapack-2.2.1.jar:/opt/spark/jars/zjsonpatch-0.3.0.jar:/opt/spark/jars/orc-shims-1.7.4.jar:/opt/spark/jars/spire-platform_2.12-0.17.0.jar:/opt/spark/jars/netty-common-4.1.74.Final.jar:/opt/spark/jars/spire_2.12-0.17.0.jar:/opt/spark/jars/netty-transport-native-epoll-4.1.74.Final-linux-aarch_64.jar:/opt/spark/jars/jdo-api-3.0.1.jar:/opt/spark/jars/hive-metastore-2.3.9.jar:/opt/spark/jars/scala-library-2.12.15.jar:/opt/spark/jars/jackson-core-asl-1.9.13.jar:/opt/spark/jars/netty-transport-native-unix-common-4.1.74.Final.jar:/opt/spark/jars/threeten-extra-1.5.0.jar:/opt/spark/jars/jackson-datatype-jsr310-2.13.3.jar:/opt/spark/jars/netty-resolver-4.1.74.Final.jar:/opt/spark/jars/lz4-java-1.8.0.jar:/opt/spark/jars/kubernetes-model-networking-5.12.2.jar:/opt/spark/jars/kubernetes-client-5.12.2.jar:/opt/spark/jars/jersey-container-servlet-2.34.jar:/opt/spark/jars/hk2-api-2.6.1.jar:/opt/spark/jars/commons-dbcp-1.4.jar:/opt/spark/jars/commons-codec-1.15.jar:/opt/spark/jars/pickle-1.2.jar:/opt/spark/jars/kubernetes-model-coordination-5.12.2.jar:/opt/spark/jars/libthrift-0.12.0.jar:/opt/spark/jars/spark-tags_2.12-3.3.0-tests.jar:/opt/spark/jars/HikariCP-2.5.1.jar:/opt/spark/jars/univocity-parsers-2.9.1.jar:/opt/spark/jars/shims-0.9.25.jar:/opt/spark/jars/stax-api-1.0.1.jar:/opt/spark/jars/netty-transport-classes-kqueue-4.1.74.Final.jar:/opt/spark/jars/log4j-api-2.17.2.jar:/opt/spark/jars/RoaringBitmap-0.9.25.jar:/opt/spark/jars/parquet-common-1.12.2.jar:/opt/spark/jars/netty-handler-4.1.74.Final.jar:/opt/spark/jars/spark-repl_2.12-3.3.0.jar:/opt/spark/jars/netty-buffer-4.1.74.Final.jar:/opt/spark/jars/spark-mllib_2.12-3.3.0.jar:/opt/spark/jars/kubernetes-model-flowcontrol-5.12.2.jar:/opt/spark/jars/oro-2.0.8.jar:/opt/spark/jars/annotations-17.0.0.jar:/opt/spark/jars/flatbuffers-java-1.12.0.jar:/opt/spark/jars/xz-1.8.jar:/opt/spark/jars/commons-compiler-3.0.16.jar:/opt/spark/jars/commons-math3-3.6.1.jar:/opt/spark/jars/zookeeper-jute-3.6.2.jar:/opt/spark/jars/hive-serde-2.3.9.jar:/opt/spark/jars/opencsv-2.3.jar:/opt/spark/jars/generex-1.0.2.jar:/opt/spark/jars/JLargeArrays-1.5.jar:/opt/spark/jars/json4s-jackson_2.12-3.7.0-M11.jar:/opt/spark/jars/audience-annotations-0.5.0.jar:/opt/spark/jars/commons-cli-1.5.0.jar:/opt/spark/jars/py4j-0.10.9.5.jar:/opt/spark/jars/hive-llap-common-2.3.9.jar:/opt/spark/jars/curator-framework-2.13.0.jar:/opt/spark/jars/cats-kernel_2.12-2.1.1.jar:/opt/spark/jars/jcl-over-slf4j-1.7.32.jar:/opt/spark/jars/httpclient-4.5.13.jar:/opt/spark/jars/netty-codec-4.1.74.Final.jar:/opt/spark/jars/commons-lang-2.6.jar:/opt/spark/jars/curator-client-2.13.0.jar:/opt/spark/jars/jakarta.xml.bind-api-2.3.2.jar:/opt/spark/jars/jsr305-3.0.0.jar:/opt/spark/jars/log4j-slf4j-impl-2.17.2.jar:/opt/spark/jars/datanucleus-core-4.1.17.jar:/opt/spark/jars/kubernetes-model-node-5.12.2.jar:/opt/spark/jars/logging-interceptor-3.12.12.jar:/opt/spark/jars/slf4j-api-1.7.32.jar:/opt/spark/jars/commons-text-1.9.jar:/opt/spark/jars/kubernetes-model-events-5.12.2.jar:/opt/spark/jars/blas-2.2.1.jar:/opt/spark/jars/scala-parser-combinators_2.12-1.1.2.jar:/opt/spark/jars/protobuf-java-2.5.0.jar:/opt/spark/jars/zstd-jni-1.5.2-1.jar:/opt/spark/jars/netty-transport-native-epoll-4.1.74.Final-linux-x86_64.jar:/opt/spark/jars/jersey-server-2.34.jar:/opt/spark/jars/orc-core-1.7.4.jar:/opt/spark/jars/transaction-api-1.1.jar:/opt/spark/jars/hive-exec-2.3.9-core.jar:/opt/spark/jars/spire-macros_2.12-0.17.0.jar:/opt/spark/jars/hadoop-client-runtime-3.3.4.jar:/opt/spark/jars/parquet-format-structures-1.12.2.jar:/opt/spark/jars/datanucleus-api-jdo-4.2.4.jar:/opt/spark/jars/kubernetes-model-metrics-5.12.2.jar:/opt/spark/jars/aircompressor-0.21.jar:/opt/spark/jars/parquet-jackson-1.12.2.jar:/opt/spark/jars/hk2-utils-2.6.1.jar:/opt/spark/jars/kubernetes-model-batch-5.12.2.jar:/opt/spark/jars/kubernetes-model-autoscaling-5.12.2.jar:/opt/spark/jars/kubernetes-model-policy-5.12.2.jar:/opt/spark/jars/orc-mapreduce-1.7.4.jar:/opt/spark/jars/avro-ipc-1.11.0.jar:/opt/spark/jars/spark-graphx_2.12-3.3.0.jar:/opt/spark/jars/kubernetes-model-apps-5.12.2.jar:/opt/spark/jars/javax.jdo-3.2.0-m3.jar:/opt/spark/jars/spark-core_2.12-3.3.0.jar:/opt/spark/jars/jakarta.validation-api-2.0.2.jar:/opt/spark/jars/commons-io-2.11.0.jar:/opt/spark/jars/compress-lzf-1.1.jar:/opt/spark/jars/arpack_combined_all-0.1.jar:/opt/spark/jars/hive-storage-api-2.7.2.jar:/opt/spark/jars/kubernetes-model-common-5.12.2.jar:/opt/spark/jars/hive-common-2.3.9.jar:/opt/spark/jars/jackson-core-2.13.3.jar:/opt/spark/jars/joda-time-2.10.13.jar:/opt/spark/jars/gson-2.2.4.jar:/opt/spark/jars/leveldbjni-all-1.8.jar:/opt/spark/jars/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:/opt/spark/jars/chill-java-0.10.0.jar:/opt/spark/jars/breeze_2.12-1.2.jar:/opt/spark/jars/json4s-core_2.12-3.7.0-M11.jar:/opt/spark/jars/kubernetes-model-discovery-5.12.2.jar:/opt/spark/jars/spark-tags_2.12-3.3.0.jar:/opt/spark/jars/jakarta.inject-2.6.1.jar:/opt/spark/jars/minlog-1.3.0.jar:/opt/spark/jars/xbean-asm9-shaded-4.20.jar:/opt/spark/jars/ivy-2.5.0.jar:/opt/spark/jars/scala-reflect-2.12.15.jar:/opt/spark/jars/commons-logging-1.1.3.jar:/opt/spark/jars/metrics-jvm-4.2.7.jar:/opt/spark/jars/uap-scala_2.12-0.11.0.jar:/opt/spark/jars/common-config-5.3.4.jar:/opt/spark/jars/hive-service-rpc-3.1.2.jar:/opt/spark/jars/spark-avro_2.12-3.1.2.jar:/opt/spark/jars/jackson-annotations-2.9.10.jar:/opt/spark/jars/spark-sql-kafka-0-10_2.12-3.1.2.jar:/opt/spark/jars/commons-pool2-2.6.2.jar:/opt/spark/jars/avro-1.8.1.jar:/opt/spark/jars/kafka-clients-2.6.0.jar:/opt/spark/jars/jmx_prometheus_javaagent-0.16.1.jar:/opt/spark/jars/slf4j-api-1.7.30.jar:/opt/spark/jars/paranamer-2.7.jar:/opt/spark/jars/zkclient-0.10.jar:/opt/spark/jars/commons_2.12-0.1.0.jar:/opt/spark/jars/postgresql-42.2.20.jar:/opt/spark/jars/spark-token-provider-kafka-0-10_2.12-3.1.2.jar:/opt/spark/jars/hadoop-aws-3.3.4.jar:/opt/spark/jars/lz4-java-1.7.1.jar:/opt/spark/jars/jackson-core-2.9.10.jar:/opt/spark/jars/abris_2.12-5.1.1.jar:/opt/spark/jars/jackson-databind-2.9.10.5.jar:/opt/spark/jars/snappy-java-1.1.8.2.jar:/opt/spark/jars/spotbugs-annotations-3.1.8.jar:/opt/spark/jars/zookeeper-3.4.14.jar:/opt/spark/jars/common-utils-5.3.4.jar:/opt/spark/jars/aws-java-sdk-bundle-1.12.81.jar:/opt/spark/jars/commons-compress-1.8.1.jar:/opt/spark/jars/joda-time-2.10.6.jar:/opt/spark/jars/xz-1.5.jar:/opt/spark/jars/zstd-jni-1.4.8-1.jar:/opt/spark/jars/kafka-avro-serializer-5.3.4.jar:/opt/spark/jars/netty-3.10.6.Final.jar:/opt/spark/jars/unused-1.0.0.jar:/opt/spark/jars/kafka-schema-registry-client-5.3.4.jar:/opt/spark/jars/jline-0.9.94.jar:/opt/hadoop/conf/
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:java.io.tmpdir=/tmp
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:java.compiler=<NA>
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:os.name=Linux
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:os.arch=amd64
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:os.version=4.15.0-192-generic
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:user.name=185
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:user.home=/opt/spark
   23/01/19 13:19:16 INFO ZooKeeper: Client environment:user.dir=/opt/spark/work-dir
   23/01/19 13:19:16 INFO ZooKeeper: Initiating client connection, connectString=kyuubi-zookeeper.zookeeper:2181 sessionTimeout=60000000 watcher=org.apache.kyuubi.shade.org.apache.curator.ConnectionState@fffdd40
   23/01/19 13:19:16 INFO EngineServiceDiscovery: Service[EngineServiceDiscovery] is initialized.
   23/01/19 13:19:16 INFO SparkTBinaryFrontendService: Service[SparkTBinaryFrontend] is initialized.
   23/01/19 13:19:16 INFO SparkSQLEngine: Service[SparkSQLEngine] is initialized.
   23/01/19 13:19:16 INFO ClientCnxn: Opening socket connection to server kyuubi-zookeeper.zookeeper/10.108.54.123:2181. Will not attempt to authenticate using SASL (unknown error)
   23/01/19 13:19:16 INFO ClientCnxn: Socket connection established to kyuubi-zookeeper.zookeeper/10.108.54.123:2181, initiating session
   23/01/19 13:19:16 INFO SparkSQLOperationManager: Service[SparkSQLOperationManager] is started.
   23/01/19 13:19:16 INFO SparkSQLSessionManager: Service[SparkSQLSessionManager] is started.
   23/01/19 13:19:16 INFO SparkSQLBackendService: Service[SparkSQLBackendService] is started.
   23/01/19 13:19:16 INFO ClientCnxn: Session establishment complete on server kyuubi-zookeeper.zookeeper/10.108.54.123:2181, sessionid = 0x200720fd76819a6, negotiated timeout = 40000
   23/01/19 13:19:16 INFO ConnectionStateManager: State change: CONNECTED
   23/01/19 13:19:16 INFO ZookeeperDiscoveryClient: Zookeeper client connection state changed to: CONNECTED
   23/01/19 13:19:16 INFO ZookeeperDiscoveryClient: Created a /kyuubi-de_1.6.1-incubating_CONNECTION_SPARK_SQL/anonymous/35d34b4c-e204-485d-b559-8afde030e936/serviceUri=10.10.46.26:37197;version=1.6.1-incubating;refId=35d34b4c-e204-485d-b559-8afde030e936;sequence=0000000000 on ZooKeeper for KyuubiServer uri: 10.10.46.26:37197
   23/01/19 13:19:16 INFO EngineServiceDiscovery: Service[EngineServiceDiscovery] is started.
   23/01/19 13:19:16 INFO SparkTBinaryFrontendService: Service[SparkTBinaryFrontend] is started.
   23/01/19 13:19:16 INFO SparkSQLEngine: Service[SparkSQLEngine] is started.
   23/01/19 13:19:16 INFO SparkSQLEngine: 
       Spark application name: kyuubi_CONNECTION_SPARK_SQL_anonymous_35d34b4c-e204-485d-b559-8afde030e936
             application ID:  spark-85247ce9c71c432290615c69a192b85e
             application tags: 
             application web UI: http://spark-25d8f985ca2ea186-driver-svc.kyuubi-jobs.svc:4040
             master: k8s://https://kubernetes.default.svc:443
             version: 3.3.0
             driver: [cpu: 1, mem: 2g]
             executor: [cpu: 2, mem: 1g, maxNum: 20]
       Start time: Thu Jan 19 13:19:01 UTC 2023
       
       User: anonymous (shared mode: CONNECTION)
       State: STARTED
       
   23/01/19 13:19:16 INFO SparkTBinaryFrontendService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V10
   23/01/19 13:19:16 INFO SparkSQLSessionManager: Opening session for anonymous@10.10.51.9
   23/01/19 13:19:16 WARN SparkSessionImpl:  Cannot modify the value of a Spark config: spark.driver.memory. See also 'https://spark.apache.org/docs/latest/sql-migration-guide.html#ddl-statements'        
   23/01/19 13:19:16 INFO HiveMetaStore: 1: get_database: global_temp
   23/01/19 13:19:16 INFO audit: ugi=anonymous    ip=unknown-ip-addr    cmd=get_database: global_temp    
   23/01/19 13:19:16 WARN HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
   23/01/19 13:19:16 WARN HiveConf: HiveConf of name hive.stats.retries.wait does not exist
   23/01/19 13:19:16 INFO HiveMetaStore: 1: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
   23/01/19 13:19:16 INFO ObjectStore: ObjectStore, initialize called
   23/01/19 13:19:16 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is POSTGRES
   23/01/19 13:19:16 INFO ObjectStore: Initialized ObjectStore
   23/01/19 13:19:16 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
   23/01/19 13:19:16 INFO HiveMetaStore: 1: get_database: default
   23/01/19 13:19:16 INFO audit: ugi=anonymous    ip=unknown-ip-addr    cmd=get_database: default    
   23/01/19 13:19:16 WARN SparkSessionImpl:  Cannot modify the value of a Spark config: spark.kubernetes.executor.podNamePrefix. See also 'https://spark.apache.org/docs/latest/sql-migration-guide.html#ddl-statements'        
   23/01/19 13:19:16 WARN SparkSessionImpl:  Cannot modify the value of a Spark config: spark.dynamicAllocation.minExecutors. See also 'https://spark.apache.org/docs/latest/sql-migration-guide.html#ddl-statements'        
   23/01/19 13:19:17 INFO BlockManagerInfo: Removed broadcast_0_piece0 on spark-25d8f985ca2ea186-driver-svc.kyuubi-jobs.svc:7079 in memory (size: 3.6 KiB, free: 1048.8 MiB)
   23/01/19 13:19:17 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.10.81.195:40511 in memory (size: 3.6 KiB, free: 413.9 MiB)
   23/01/19 13:19:17 INFO SparkSQLSessionManager: anonymous's session with SessionHandle [0d8be737-c71d-4870-954b-831835390f09] is opened, current opening sessions 1
   23/01/19 13:19:17 INFO GetCurrentCatalog: Processing anonymous's query[54bdff01-9247-4930-9965-c1d925338f5f]: INITIALIZED_STATE -> RUNNING_STATE, statement:
   GetCurrentCatalog
   23/01/19 13:19:17 INFO GetCurrentCatalog: Processing anonymous's query[54bdff01-9247-4930-9965-c1d925338f5f]: RUNNING_STATE -> FINISHED_STATE, time taken: 0.004 seconds
   23/01/19 13:19:17 ERROR SparkTBinaryFrontendService: Error fetching results: 
   org.apache.kyuubi.KyuubiSQLException: OperationHandle [54bdff01-9247-4930-9965-c1d925338f5f] failed to generate operation log
       at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
       at org.apache.kyuubi.operation.OperationManager.$anonfun$getOperationLogRowSet$2(OperationManager.scala:146)
       at scala.Option.getOrElse(Option.scala:189)
       at org.apache.kyuubi.operation.OperationManager.getOperationLogRowSet(OperationManager.scala:146)
       at org.apache.kyuubi.session.AbstractSession.fetchResults(AbstractSession.scala:236)
       at org.apache.kyuubi.service.AbstractBackendService.fetchResults(AbstractBackendService.scala:204)
       at org.apache.kyuubi.service.TFrontendService.FetchResults(TFrontendService.scala:520)
       at org.apache.kyuubi.shade.org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
       at org.apache.kyuubi.shade.org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
       at org.apache.kyuubi.shade.org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
       at org.apache.kyuubi.shade.org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
       at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36)
       at org.apache.kyuubi.shade.org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
       at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
       at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
       at java.base/java.lang.Thread.run(Unknown Source)
   23/01/19 13:19:17 ERROR SparkTBinaryFrontendService: Error fetching results: 
   org.apache.kyuubi.KyuubiSQLException: OperationHandle [54bdff01-9247-4930-9965-c1d925338f5f] failed to generate operation log
       at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
       at org.apache.kyuubi.operation.OperationManager.$anonfun$getOperationLogRowSet$2(OperationManager.scala:146)
       at scala.Option.getOrElse(Option.scala:189)
       at org.apache.kyuubi.operation.OperationManager.getOperationLogRowSet(OperationManager.scala:146)
       at org.apache.kyuubi.session.AbstractSession.fetchResults(AbstractSession.scala:236)
       at org.apache.kyuubi.service.AbstractBackendService.fetchResults(AbstractBackendService.scala:204)
       at org.apache.kyuubi.service.TFrontendService.FetchResults(TFrontendService.scala:520)
       at org.apache.kyuubi.shade.org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
       at org.apache.kyuubi.shade.org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1822)
       at org.apache.kyuubi.shade.org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
       at org.apache.kyuubi.shade.org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
       at org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:36)
       at org.apache.kyuubi.shade.org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
       at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
       at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
       at java.base/java.lang.Thread.run(Unknown Source)
   23/01/19 13:19:17 INFO GetTypeInfo: Processing anonymous's query[594236bd-4d08-42be-8eaf-7e8f5fc2b831]: INITIALIZED_STATE -> RUNNING_STATE, statement:
   GetTypeInfo
   23/01/19 13:19:17 INFO GetTypeInfo: Processing anonymous's query[594236bd-4d08-42be-8eaf-7e8f5fc2b831]: RUNNING_STATE -> FINISHED_STATE, time taken: 0.001 seconds
   23/01/19 13:19:17 INFO GetCatalogs: Processing anonymous's query[fcbef772-348c-4523-8708-00d856ba2121]: INITIALIZED_STATE -> RUNNING_STATE, statement:
   GetCatalogs
   23/01/19 13:19:17 INFO GetCatalogs: Processing anonymous's query[fcbef772-348c-4523-8708-00d856ba2121]: RUNNING_STATE -> FINISHED_STATE, time taken: 0.004 seconds
   23/01/19 13:20:07 INFO SparkTBinaryFrontendService: Received request of closing SessionHandle [0d8be737-c71d-4870-954b-831835390f09]
   23/01/19 13:20:07 INFO SparkSQLSessionManager: SessionHandle [0d8be737-c71d-4870-954b-831835390f09] is closed, current opening sessions 0
   23/01/19 13:20:07 INFO SparkSQLSessionManager: Session stopped due to shared level is Connection.
   23/01/19 13:20:07 INFO SparkSQLEngine: Service: [SparkTBinaryFrontend] is stopping.
   23/01/19 13:20:07 INFO SparkTBinaryFrontendService: Service: [EngineServiceDiscovery] is stopping.
   23/01/19 13:20:07 WARN ZookeeperDiscoveryClient: This Kyuubi instance 10.10.46.26:37197 is now de-registered from ZooKeeper. The server will be shut down after the last client session completes.
   23/01/19 13:20:07 INFO EngineServiceDiscovery: Clean up discovery service due to this is connection share level.
   23/01/19 13:20:07 INFO CuratorFrameworkImpl: backgroundOperationsLoop exiting
   23/01/19 13:20:07 INFO ZooKeeper: Session: 0x200720fd76819a6 closed
   23/01/19 13:20:07 INFO ClientCnxn: EventThread shut down for session: 0x200720fd76819a6
   23/01/19 13:20:07 INFO EngineServiceDiscovery: Service[EngineServiceDiscovery] is stopped.
   23/01/19 13:20:07 INFO SparkTBinaryFrontendService: Service[SparkTBinaryFrontend] is stopped.
   23/01/19 13:20:07 INFO SparkTBinaryFrontendService: SparkTBinaryFrontend has stopped
   23/01/19 13:20:07 INFO SparkSQLEngine: Service: [SparkSQLBackendService] is stopping.
   23/01/19 13:20:07 INFO SparkSQLBackendService: Service: [SparkSQLSessionManager] is stopping.
   23/01/19 13:20:07 INFO SparkSQLSessionManager: Service: [SparkSQLOperationManager] is stopping.
   23/01/19 13:20:07 INFO SparkSQLOperationManager: Service[SparkSQLOperationManager] is stopped.
   23/01/19 13:20:07 INFO SparkSQLSessionManager: Service[SparkSQLSessionManager] is stopped.
   23/01/19 13:20:07 INFO SparkSQLBackendService: Service[SparkSQLBackendService] is stopped.
   23/01/19 13:20:07 INFO SparkSQLEngine: Service[SparkSQLEngine] is stopped.
   23/01/19 13:20:07 INFO SparkTBinaryFrontendService: Finished closing SessionHandle [0d8be737-c71d-4870-954b-831835390f09]
   23/01/19 13:20:07 INFO SparkUI: Stopped Spark web UI at http://spark-25d8f985ca2ea186-driver-svc.kyuubi-jobs.svc:4040
   23/01/19 13:20:07 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
   23/01/19 13:20:07 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
   23/01/19 13:20:07 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed.
   23/01/19 13:20:08 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
   23/01/19 13:20:08 INFO MemoryStore: MemoryStore cleared
   23/01/19 13:20:08 INFO BlockManager: BlockManager stopped
   23/01/19 13:20:08 INFO BlockManagerMaster: BlockManagerMaster stopped
   23/01/19 13:20:08 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
   23/01/19 13:20:08 INFO SparkContext: Successfully stopped SparkContext
   23/01/19 13:20:08 INFO ShutdownHookManager: Shutdown hook called
   23/01/19 13:20:08 INFO ShutdownHookManager: Deleting directory /var/data/spark-250920f8-2aa9-4458-99de-9aa60e944508/spark-3f250941-abb3-43cb-85dd-f42cbd079f4e
   23/01/19 13:20:08 INFO ShutdownHookManager: Deleting directory /tmp/spark-2c6ca710-f0ef-4024-b373-e3d24d67b819
   23/01/19 13:20:08 INFO ShutdownHookManager: Deleting directory /tmp/spark-aff3b193-2214-4183-b374-38e30d7a5b28
   23/01/19 13:20:08 INFO ShutdownHookManager: Deleting directory /tmp/spark-d5fd5670-e8c9-465a-b657-dbf1ab4b83bd
   23/01/19 13:20:08 INFO ShutdownHookManager: Deleting directory /tmp/spark-9c76c8ef-8bf3-4436-8cde-a9cf8a428ec8
   23/01/19 13:20:08 INFO MetricsSystemImpl: Stopping s3a-file-system metrics system...
   23/01/19 13:20:08 INFO MetricsSystemImpl: s3a-file-system metrics system stopped.
   23/01/19 13:20:08 INFO MetricsSystemImpl: s3a-file-system metrics system shutdown complete.
   Stream closed EOF for kyuubi-jobs/kyuubi-connection-spark-sql-anonymous-35d34b4c-e204-485d-b559-8afde030e936-3d6f5585ca2e9c90-driver (kyuubi-spark-driver)
   ```
   
   
   ### Kyuubi Server Configurations
   
   ```yaml
   # Kyuubi configurations https://kyuubi.apache.org/docs/latest/deployment/settings.html
       kyuubi.authentication=NONE
       kyuubi.engine.share.level=CONNECTION
       kyuubi.frontend.bind.host={{ .Values.server.bind.host }}
       kyuubi.frontend.bind.port={{ .Values.server.bind.port }}
       kyuubi.metrics.reporters=PROMETHEUS
       # Zookeeper required for kyuubi HA
       kyuubi.ha.enabled=true
       kyuubi.ha.addresses=kyuubi-zookeeper.zookeeper:2181
       # Required for k8s CLUSTER mode
       kyuubi.frontend.connection.url.use.hostname=false
       # Timeouts
       kyuubi.backend.engine.exec.pool.keepalive.time=PT10M
       kyuubi.backend.server.exec.pool.keepalive.time=PT10M
       kyuubi.batch.application.check.interval=PT10M
       kyuubi.engine.user.isolated.spark.session.idle.interval=PT10M
       kyuubi.frontend.thrift.worker.keepalive.time=PT10M
       kyuubi.ha.zookeeper.session.timeout=60000000
       kyuubi.zookeeper.embedded.min.session.timeout=600000
       kyuubi.zookeeper.embedded.max.session.timeout=6000000
       kyuubi.zookeeper.embedded.tick.time=30000
       kyuubi.session.engine.alive.timeout=PT10M
       kyuubi.session.engine.idle.timeout=PT10M
       kyuubi.session.engine.check.interval=PT10M
       kyuubi.session.engine.alive.probe.interval=PT10M
       kyuubi.session.engine.login.timeout=PT10M
       kyuubi.session.check.interval=PT10M
       kyuubi.session.idle.timeout=PT10M
       kyuubi.session.engine.startup.waitCompletion=true
       
       # Spark configurations
       spark.master=k8s://https://kubernetes.default.svc:443
       spark.submit.deployMode=cluster
       spark.kubernetes.container.image=harbor.dwh.runit.cc/de-image-spark/spark:v3.0.65
       spark.kubernetes.namespace={{ .Values.sparkNamespace }}
       spark.kubernetes.authenticate.driver.serviceAccountName={{ .Release.Name }}-kyuubi
       spark.kubernetes.authenticate.serviceAccountName={{ .Release.Name }}-kyuubi
       # Cert & token required to connect to k8s api
       spark.kubernetes.authenticate.caCertFile=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
       spark.kubernetes.authenticate.oauthTokenFile=/var/run/secrets/kubernetes.io/serviceaccount/token
       # Required for k8s CLUSTER mode, to transfer kyuubi sqlengine jar to driver pod
       spark.kubernetes.file.upload.path=s3a://<bucket_name>
       spark.dynamicAllocation.enabled=true
       spark.dynamicAllocation.minExecutors=2
       spark.dynamicAllocation.maxExecutors=20
       spark.dynamicAllocation.initialExecutors=2
       spark.dynamicAllocation.shuffleTracking.enabled=true
       spark.dynamicAllocation.executorAllocationRatio=1
       spark.kubernetes.driver.podNamePrefix=kyuubi-{{ .Release.Name }}
       spark.kubernetes.executor.podNamePrefix=kyuubi-{{ .Release.Name }}
       kyuubi.ha.zookeeper.namespace=kyuubi-{{ .Release.Name }}
       spark.driver.memory=4g
       # Timeouts
       spark.dynamicAllocation.executorIdleTimeout=600s
   
       # S3 
       spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
       spark.hadoop.fs.s3a.connection.ssl.enabled=true
       spark.hadoop.fs.s3a.fast.upload=true
       spark.driver.extraJavaOptions=-Divy.cache.dir=/tmp -Divy.home=/tmp
       spark.hadoop.fs.s3a.endpoint=https://<endpoint>
       spark.hadoop.fs.s3a.downgrade.syncable.exceptions=true
       spark.hadoop.fs.s3a.change.detection.mode=warn
       
       # Logs
       spark.eventLog.dir=s3a://<logs_bucket_name>
       spark.eventLog.enabled=true
       spark.eventLog.rotation.enabled=true
       spark.eventLog.rotation.interval=3600
       spark.eventLog.rotation.minFileSize=100m
       spark.eventLog.rotation.maxFilesToRetain=2
       
       # Spark SQL
       spark.sql.adaptive.enabled=true
       spark.sql.broadcastTimeout=30000
       spark.sql.legacy.parquet.datetimeRebaseModeInWrite=LEGACY
       spark.sql.parquet.mergeSchema=true
       spark.sql.sources.partitionOverwriteMode=dynamic
       spark.sql.warehouse.dir={{ .Values.spark_sql_warehouse_dir }}
       
       # path to spark pod template
       spark.kubernetes.driver.podTemplateFile=/opt/kyuubi/conf/pod_template.yml
       spark.kubernetes.executor.podTemplateFile=/opt/kyuubi/conf/pod_template.yml
   ```
   
   
   ### Kyuubi Engine Configurations
   
   _No response_
   
   ### Additional context
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes. I would be willing to submit a PR with guidance from the Kyuubi community to fix.
   - [X] No. I cannot submit a PR at this time.




[GitHub] [kyuubi] github-actions[bot] commented on issue #4194: [Bug] Unable to adjust kyuubi driver pod idle timeout.

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on issue #4194:
URL: https://github.com/apache/kyuubi/issues/4194#issuecomment-1397031281

   Hello @Swarvenstein,
   Thanks for finding the time to report the issue!
   We really appreciate the community's efforts to improve Apache Kyuubi.




[GitHub] [kyuubi] Swarvenstein closed issue #4194: [Bug] Unable to adjust kyuubi driver pod idle timeout.

Posted by "Swarvenstein (via GitHub)" <gi...@apache.org>.
Swarvenstein closed issue #4194: [Bug] Unable to adjust kyuubi driver pod idle timeout.
URL: https://github.com/apache/kyuubi/issues/4194




[GitHub] [kyuubi] pan3793 commented on issue #4194: [Bug] Unable to adjust kyuubi driver pod idle timeout.

Posted by GitBox <gi...@apache.org>.
pan3793 commented on issue #4194:
URL: https://github.com/apache/kyuubi/issues/4194#issuecomment-1398217596

   Hi @Swarvenstein, I noticed the engine log contains the following messages
   ```
   SparkTBinaryFrontendService: Received request of closing SessionHandle [0d8be737-c71d-4870-954b-831835390f09]
   ...
   SparkSQLSessionManager: SessionHandle [0d8be737-c71d-4870-954b-831835390f09] is closed, current opening sessions 0
   ...
   SparkSQLSessionManager: Session stopped due to shared level is Connection.
   ```
   This indicates that you are using the CONNECTION engine share level, which means every connection (session) creates a new Spark application, and once the connection (session) is closed, the engine terminates itself immediately to save resources.
   
   If you want to share the engine (Spark application) across sessions (connections), set `kyuubi.engine.share.level` to another option, such as 'USER'.
   
   Ref: https://kyuubi.readthedocs.io/en/master/deployment/engine_share_level.html
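   
   For example, a minimal sketch of the relevant lines in kyuubi-defaults.conf, reusing the PT10M value from your config (illustrative, not a recommendation):
   ```
   # Share one engine per user instead of per connection, so an idle
   # engine can be reused by later connections from the same user.
   kyuubi.engine.share.level=USER
   # Under a non-CONNECTION share level this idle timeout takes effect;
   # PT10M (10 minutes) matches the value you already tried.
   kyuubi.session.engine.idle.timeout=PT10M
   ```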




[GitHub] [kyuubi] Swarvenstein commented on issue #4194: [Bug] Unable to adjust kyuubi driver pod idle timeout.

Posted by "Swarvenstein (via GitHub)" <gi...@apache.org>.
Swarvenstein commented on issue #4194:
URL: https://github.com/apache/kyuubi/issues/4194#issuecomment-1399894690

   Thanks again for your help with this issue. I think it can be closed.




[GitHub] [kyuubi] Swarvenstein commented on issue #4194: [Bug] Unable to adjust kyuubi driver pod idle timeout.

Posted by GitBox <gi...@apache.org>.
Swarvenstein commented on issue #4194:
URL: https://github.com/apache/kyuubi/issues/4194#issuecomment-1398336702

   Hello @pan3793 
   It looks like your solution of changing the engine share level works, and I really appreciate that.
   Just to clarify all the aspects: is there really no way to extend the session idle timeout under the CONNECTION engine share level?




[GitHub] [kyuubi] pan3793 commented on issue #4194: [Bug] Unable to adjust kyuubi driver pod idle timeout.

Posted by GitBox <gi...@apache.org>.
pan3793 commented on issue #4194:
URL: https://github.com/apache/kyuubi/issues/4194#issuecomment-1398417717

   > is there really no way to extend the session idle timeout under the CONNECTION engine share level?
   
   That's right: Kyuubi silently ignores `kyuubi.session.engine.idle.timeout` under the CONNECTION share level.
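   
   As a hedged aside: Kyuubi session confs can also be supplied in the JDBC connection URL after the `#` separator, so a single client could opt into a reusable engine without editing kyuubi-defaults.conf. The host, port, and PT10M value below are placeholders:
   ```
   jdbc:hive2://kyuubi-host:10009/default;#kyuubi.engine.share.level=USER;kyuubi.session.engine.idle.timeout=PT10M
   ```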
   
   




[GitHub] [kyuubi] pan3793 commented on issue #4194: [Bug] Unable to adjust kyuubi driver pod idle timeout.

Posted by GitBox <gi...@apache.org>.
pan3793 commented on issue #4194:
URL: https://github.com/apache/kyuubi/issues/4194#issuecomment-1398229863

   > ...
   > Why it's needed - driver start takes about 20-30s and we want to keep it 5-10 minutes in idle state to prevent this time overhead for kyuubi users in each separate request.
   
   Sorry, I'm not a native English speaker and am not entirely clear on your original question. As I understand it, you may be worried about one of the following:
   
   1. `spark-submit` takes a long time, and Kyuubi treats the launch operation as failed once the launch time reaches the threshold. In this case, you can increase `kyuubi.session.engine.initialize.timeout`, whose default value is `PT3M` (see the sketch after this list).
   
   2. `spark-submit` is a heavy operation, and you want to keep the started engine alive for a period (e.g., 10 minutes) even when it holds no active connection, so that subsequent connections can reuse it. In this case, refer to my first reply about the engine share level.
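   
   For case 1, a minimal sketch of the relevant line in kyuubi-defaults.conf (PT10M is only an illustrative value):
   ```
   # Give slow engine launches (image pulls, pod scheduling, etc.) more
   # time before Kyuubi marks the launch as failed. The default is PT3M.
   kyuubi.session.engine.initialize.timeout=PT10M
   ```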

