Posted to user@spark.apache.org by "Bode, Meikel, NMA-CFD" <Me...@Bertelsmann.de> on 2021/05/14 08:43:40 UTC
Thrift2 Server on Kubernetes?
Hi all,
We are migrating to K8s and I wonder whether there are already "good practices" for running Thrift2 on K8s?
Best,
Meikel
RE: Thrift2 Server on Kubernetes?
Posted by "Bode, Meikel, NMA-CFD" <Me...@Bertelsmann.de>.
Hello Kidong Lee, hello all others :)
I managed to implement your solution (https://itnext.io/hive-on-spark-in-kubernetes-115c8e9fa5c1) using the latest Spark 3.1.2 release on Ubuntu 20.04 and MicroK8s 1.21.
It was a little bit tricky regarding direct-csi, which I didn't manage to install; for testing I use hostPath instead. NFS was also tricky but is now running.
After adjusting the create.sh that starts the Thrift Server, the driver pod comes up and the workers are also created successfully:
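For reference, what create.sh boils down to in my setup is a spark-submit against the Kubernetes API server. This is only a sketch; the API server address and the jar path are placeholders, and the exact flags in the blog's script may differ. The namespace, image, runner class, and executor sizing below are the values visible in the driver log further down:

```shell
# Rough sketch of the submit done by create.sh (placeholders: API server
# address and jar location; other values match the driver log below).
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --name spark-thrift-server \
  --class io.mykidong.hive.SparkThriftServerRunner \
  --conf spark.kubernetes.namespace=hive-spo \
  --conf spark.kubernetes.container.image=localhost:5000/spark:v3.1.2 \
  --conf spark.executor.instances=4 \
  --conf spark.executor.memory=2g \
  --conf spark.executor.cores=1 \
  --conf spark.hadoop.hive.metastore.uris=thrift://metastore:9083 \
  local:///opt/spark/jars/spark-runner.jar
```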
[inline screenshot omitted: driver and executor pods starting]
From the logs I see that the workers successfully connect to the drivers.
Then, after 30 seconds or so, the driver pod switches to state "Completed" and the workers get removed:
[inline screenshot omitted: pods after the driver completed]
What I can see from the driver description:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 37m default-scheduler Successfully assigned hive-spo/spark-thrift-server-5f18bd7a75961a32-driver to ip-10-0-14-119
Warning FailedMount 37m kubelet MountVolume.SetUp failed for volume "spark-conf-volume-driver" : configmap "spark-drv-da815b7a75962422-conf-map" not found
Normal Pulling 37m kubelet Pulling image "localhost:5000/spark:v3.1.2"
Normal Pulled 37m kubelet Successfully pulled image "localhost:5000/spark:v3.1.2" in 27.481414ms
Normal Created 37m kubelet Created container spark-kubernetes-driver
Normal Started 37m kubelet Started container spark-kubernetes-driver
What I can see from the worker description is that there are warnings related to insufficient CPU resources:
[inline screenshot omitted: executor pod events with insufficient-CPU warnings]
That's strange, because on the same cluster (in a different namespace) we run Spark drivers from Jupyter notebooks, with more RAM and CPU assigned, for many users.
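To rule out namespace-level limits, I looked at allocatable node resources and any ResourceQuota in the namespace (the namespace name "hive-spo" is the one from the driver events above; the pod name is a placeholder):

```shell
# Allocatable CPU/memory per node, for comparison with the executor requests.
kubectl describe nodes | grep -A 6 "Allocatable"

# Any ResourceQuota or LimitRange constraining the namespace.
kubectl get resourcequota -n hive-spo
kubectl get limitrange -n hive-spo

# Scheduling details for a pending executor (see the Events section).
kubectl describe pod <executor-pod-name> -n hive-spo
```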
Here is the full driver log:
++ id -u
+ myuid=185
++ id -g
+ mygid=0
+ set +e
++ getent passwd 185
+ uidentry=
+ set -e
+ '[' -z '' ']'
+ '[' -w /etc/passwd ']'
+ echo '185:x:185:0:anonymous uid:/opt/spark:/bin/false'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z x ']'
+ SPARK_CLASSPATH='/opt/spark/conf::/opt/spark/jars/*'
+ case "$1" in
+ shift 1
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
+ exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=10.1.15.44 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class io.mykidong.hive.SparkThriftServerRunner spark-internal
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/opt/spark/jars/spark-unsafe_2.12-3.1.2.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
:: loading settings :: url = jar:file:/opt/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
Ivy Default Cache set to: /tmp
The jars for the packages stored in: /tmp/jars
com.amazonaws#aws-java-sdk-s3 added as a dependency
org.apache.hadoop#hadoop-aws added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-5599eee6-ba4b-43ee-8ac6-9ab6d57b465b;1.0
confs: [default]
found com.amazonaws#aws-java-sdk-s3;1.11.375 in central
found com.amazonaws#aws-java-sdk-kms;1.11.375 in central
found com.amazonaws#aws-java-sdk-core;1.11.375 in central
found commons-logging#commons-logging;1.1.3 in central
found org.apache.httpcomponents#httpclient;4.5.5 in central
found org.apache.httpcomponents#httpcore;4.4.9 in central
found commons-codec#commons-codec;1.10 in central
found software.amazon.ion#ion-java;1.0.2 in central
found com.fasterxml.jackson.core#jackson-databind;2.6.7.1 in central
found com.fasterxml.jackson.core#jackson-annotations;2.6.0 in central
found com.fasterxml.jackson.core#jackson-core;2.6.7 in central
found com.fasterxml.jackson.dataformat#jackson-dataformat-cbor;2.6.7 in central
found joda-time#joda-time;2.8.1 in central
found com.amazonaws#jmespath-java;1.11.375 in central
found org.apache.hadoop#hadoop-aws;3.2.0 in central
found com.amazonaws#aws-java-sdk-bundle;1.11.375 in central
downloading https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-s3/1.11.375/aws-java-sdk-s3-1.11.375.jar ...
[SUCCESSFUL ] com.amazonaws#aws-java-sdk-s3;1.11.375!aws-java-sdk-s3.jar (47ms)
downloading https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/3.2.0/hadoop-aws-3.2.0.jar ...
[SUCCESSFUL ] org.apache.hadoop#hadoop-aws;3.2.0!hadoop-aws.jar (28ms)
downloading https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-kms/1.11.375/aws-java-sdk-kms-1.11.375.jar ...
[SUCCESSFUL ] com.amazonaws#aws-java-sdk-kms;1.11.375!aws-java-sdk-kms.jar (17ms)
downloading https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-core/1.11.375/aws-java-sdk-core-1.11.375.jar ...
[SUCCESSFUL ] com.amazonaws#aws-java-sdk-core;1.11.375!aws-java-sdk-core.jar (35ms)
downloading https://repo1.maven.org/maven2/com/amazonaws/jmespath-java/1.11.375/jmespath-java-1.11.375.jar ...
[SUCCESSFUL ] com.amazonaws#jmespath-java;1.11.375!jmespath-java.jar (6ms)
downloading https://repo1.maven.org/maven2/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar ...
[SUCCESSFUL ] commons-logging#commons-logging;1.1.3!commons-logging.jar (5ms)
downloading https://repo1.maven.org/maven2/org/apache/httpcomponents/httpclient/4.5.5/httpclient-4.5.5.jar ...
[SUCCESSFUL ] org.apache.httpcomponents#httpclient;4.5.5!httpclient.jar (24ms)
downloading https://repo1.maven.org/maven2/software/amazon/ion/ion-java/1.0.2/ion-java-1.0.2.jar ...
[SUCCESSFUL ] software.amazon.ion#ion-java;1.0.2!ion-java.jar(bundle) (18ms)
downloading https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-databind/2.6.7.1/jackson-databind-2.6.7.1.jar ...
[SUCCESSFUL ] com.fasterxml.jackson.core#jackson-databind;2.6.7.1!jackson-databind.jar(bundle) (33ms)
downloading https://repo1.maven.org/maven2/com/fasterxml/jackson/dataformat/jackson-dataformat-cbor/2.6.7/jackson-dataformat-cbor-2.6.7.jar ...
[SUCCESSFUL ] com.fasterxml.jackson.dataformat#jackson-dataformat-cbor;2.6.7!jackson-dataformat-cbor.jar(bundle) (4ms)
downloading https://repo1.maven.org/maven2/joda-time/joda-time/2.8.1/joda-time-2.8.1.jar ...
[SUCCESSFUL ] joda-time#joda-time;2.8.1!joda-time.jar (20ms)
downloading https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore/4.4.9/httpcore-4.4.9.jar ...
[SUCCESSFUL ] org.apache.httpcomponents#httpcore;4.4.9!httpcore.jar (12ms)
downloading https://repo1.maven.org/maven2/commons-codec/commons-codec/1.10/commons-codec-1.10.jar ...
[SUCCESSFUL ] commons-codec#commons-codec;1.10!commons-codec.jar (10ms)
downloading https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-annotations/2.6.0/jackson-annotations-2.6.0.jar ...
[SUCCESSFUL ] com.fasterxml.jackson.core#jackson-annotations;2.6.0!jackson-annotations.jar(bundle) (3ms)
downloading https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-core/2.6.7/jackson-core-2.6.7.jar ...
[SUCCESSFUL ] com.fasterxml.jackson.core#jackson-core;2.6.7!jackson-core.jar(bundle) (10ms)
downloading https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/1.11.375/aws-java-sdk-bundle-1.11.375.jar ...
[SUCCESSFUL ] com.amazonaws#aws-java-sdk-bundle;1.11.375!aws-java-sdk-bundle.jar (634ms)
:: resolution report :: resolve 11779ms :: artifacts dl 915ms
:: modules in use:
com.amazonaws#aws-java-sdk-bundle;1.11.375 from central in [default]
com.amazonaws#aws-java-sdk-core;1.11.375 from central in [default]
com.amazonaws#aws-java-sdk-kms;1.11.375 from central in [default]
com.amazonaws#aws-java-sdk-s3;1.11.375 from central in [default]
com.amazonaws#jmespath-java;1.11.375 from central in [default]
com.fasterxml.jackson.core#jackson-annotations;2.6.0 from central in [default]
com.fasterxml.jackson.core#jackson-core;2.6.7 from central in [default]
com.fasterxml.jackson.core#jackson-databind;2.6.7.1 from central in [default]
com.fasterxml.jackson.dataformat#jackson-dataformat-cbor;2.6.7 from central in [default]
commons-codec#commons-codec;1.10 from central in [default]
commons-logging#commons-logging;1.1.3 from central in [default]
joda-time#joda-time;2.8.1 from central in [default]
org.apache.hadoop#hadoop-aws;3.2.0 from central in [default]
org.apache.httpcomponents#httpclient;4.5.5 from central in [default]
org.apache.httpcomponents#httpcore;4.4.9 from central in [default]
software.amazon.ion#ion-java;1.0.2 from central in [default]
:: evicted modules:
commons-logging#commons-logging;1.2 by [commons-logging#commons-logging;1.1.3] in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 17 | 16 | 16 | 1 || 16 | 16 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-5599eee6-ba4b-43ee-8ac6-9ab6d57b465b
confs: [default]
16 artifacts copied, 0 already retrieved (103089kB/114ms)
21/07/05 07:33:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/07/05 07:33:48 WARN MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/07/05 07:33:50 INFO HiveThriftServer2: Started daemon with process name: 13@spark-thrift-server-5f18bd7a75961a32-driver
21/07/05 07:33:50 INFO SignalUtils: Registering signal handler for TERM
21/07/05 07:33:50 INFO SignalUtils: Registering signal handler for HUP
21/07/05 07:33:50 INFO SignalUtils: Registering signal handler for INT
21/07/05 07:33:50 INFO HiveThriftServer2: Starting SparkContext
21/07/05 07:33:50 INFO HiveConf: Found configuration file null
21/07/05 07:33:50 INFO SparkContext: Running Spark version 3.1.2
21/07/05 07:33:50 INFO ResourceUtils: ==============================================================
21/07/05 07:33:50 INFO ResourceUtils: No custom resources configured for spark.driver.
21/07/05 07:33:50 INFO ResourceUtils: ==============================================================
21/07/05 07:33:50 INFO SparkContext: Submitted application: spark-thrift-server
21/07/05 07:33:50 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 2048, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
21/07/05 07:33:50 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
21/07/05 07:33:50 INFO ResourceProfileManager: Added ResourceProfile id: 0
21/07/05 07:33:50 INFO SecurityManager: Changing view acls to: 185,ubuntu
21/07/05 07:33:50 INFO SecurityManager: Changing modify acls to: 185,ubuntu
21/07/05 07:33:50 INFO SecurityManager: Changing view acls groups to:
21/07/05 07:33:50 INFO SecurityManager: Changing modify acls groups to:
21/07/05 07:33:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(185, ubuntu); groups with view permissions: Set(); users with modify permissions: Set(185, ubuntu); groups with modify permissions: Set()
21/07/05 07:33:50 INFO Utils: Successfully started service 'sparkDriver' on port 7078.
21/07/05 07:33:50 INFO SparkEnv: Registering MapOutputTracker
21/07/05 07:33:50 INFO SparkEnv: Registering BlockManagerMaster
21/07/05 07:33:50 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
21/07/05 07:33:50 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
21/07/05 07:33:50 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
21/07/05 07:33:50 INFO DiskBlockManager: Created local directory at /localdir/blockmgr-3c5f5741-1669-44c4-8982-0e54e714ce23
21/07/05 07:33:50 INFO MemoryStore: MemoryStore started with capacity 413.9 MiB
21/07/05 07:33:50 INFO SparkEnv: Registering OutputCommitCoordinator
21/07/05 07:33:51 INFO Utils: Successfully started service 'SparkUI' on port 4040.
21/07/05 07:33:51 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://spark-thrift-server-5f18bd7a75961a32-driver-svc.hive-spo.svc:4040
21/07/05 07:33:51 INFO SparkContext: Added JAR file:/tmp/spark-2b1fd9d7-7d38-4dbb-9f9f-aa9c8ac746aa/spark-runner.jar at spark://spark-thrift-server-5f18bd7a75961a32-driver-svc.hive-spo.svc:7078/jars/spark-runner.jar with timestamp 1625470430281
21/07/05 07:33:51 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
21/07/05 07:33:52 INFO ExecutorPodsAllocator: Going to request 4 executors from Kubernetes for ResourceProfile Id: 0, target: 4 running: 0.
21/07/05 07:33:52 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
21/07/05 07:33:52 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 7079.
21/07/05 07:33:52 INFO NettyBlockTransferService: Server created on spark-thrift-server-5f18bd7a75961a32-driver-svc.hive-spo.svc:7079
21/07/05 07:33:52 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
21/07/05 07:33:52 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, spark-thrift-server-5f18bd7a75961a32-driver-svc.hive-spo.svc, 7079, None)
21/07/05 07:33:52 INFO BlockManagerMasterEndpoint: Registering block manager spark-thrift-server-5f18bd7a75961a32-driver-svc.hive-spo.svc:7079 with 413.9 MiB RAM, BlockManagerId(driver, spark-thrift-server-5f18bd7a75961a32-driver-svc.hive-spo.svc, 7079, None)
21/07/05 07:33:52 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, spark-thrift-server-5f18bd7a75961a32-driver-svc.hive-spo.svc, 7079, None)
21/07/05 07:33:52 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, spark-thrift-server-5f18bd7a75961a32-driver-svc.hive-spo.svc, 7079, None)
21/07/05 07:33:52 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
21/07/05 07:33:52 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
21/07/05 07:33:52 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
21/07/05 07:34:22 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000000000(ns)
21/07/05 07:34:22 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('s3a:/xxxxx/apps/spark/warehouse').
21/07/05 07:34:22 INFO SharedState: Warehouse path is 's3a:/spo-hive/apps/spark/warehouse'.
21/07/05 07:34:23 INFO HiveUtils: Initializing HiveMetastoreConnection version 2.3.7 using Spark classes.
21/07/05 07:34:23 INFO SessionState: Created local directory: /tmp/185
21/07/05 07:34:24 INFO SessionState: Created HDFS directory: /tmp/hive/185/d2d9e737-9c2b-4dd9-85ec-63ec3534dfec
21/07/05 07:34:24 INFO SessionState: Created local directory: /tmp/185/d2d9e737-9c2b-4dd9-85ec-63ec3534dfec
21/07/05 07:34:24 INFO SessionState: Created HDFS directory: /tmp/hive/185/d2d9e737-9c2b-4dd9-85ec-63ec3534dfec/_tmp_space.db
21/07/05 07:34:24 INFO HiveClientImpl: Warehouse location for Hive client (version 2.3.7) is s3a:/spo-hive/apps/spark/warehouse
21/07/05 07:34:24 INFO metastore: Trying to connect to metastore with URI thrift://metastore:9083
21/07/05 07:34:24 INFO metastore: Opened a connection to metastore, current connections: 1
21/07/05 07:34:24 INFO metastore: Connected to metastore.
21/07/05 07:34:24 INFO HiveUtils: Initializing execution hive, version 2.3.7
21/07/05 07:34:24 INFO HiveClientImpl: Warehouse location for Hive client (version 2.3.7) is s3a:/spo-hive/apps/spark/warehouse
21/07/05 07:34:25 INFO SessionManager: Operation log root directory is created: /tmp/185/operation_logs
21/07/05 07:34:25 INFO SessionManager: HiveServer2: Background operation thread pool size: 100
21/07/05 07:34:25 INFO SessionManager: HiveServer2: Background operation thread wait queue size: 100
21/07/05 07:34:25 INFO SessionManager: HiveServer2: Background operation thread keepalive time: 10 seconds
21/07/05 07:34:25 INFO AbstractService: Service:OperationManager is inited.
21/07/05 07:34:25 INFO AbstractService: Service:SessionManager is inited.
21/07/05 07:34:25 INFO AbstractService: Service: CLIService is inited.
21/07/05 07:34:25 INFO AbstractService: Service:ThriftBinaryCLIService is inited.
21/07/05 07:34:25 INFO AbstractService: Service: HiveServer2 is inited.
21/07/05 07:34:25 INFO AbstractService: Service:OperationManager is started.
21/07/05 07:34:25 INFO AbstractService: Service:SessionManager is started.
21/07/05 07:34:25 INFO AbstractService: Service: CLIService is started.
21/07/05 07:34:25 INFO AbstractService: Service:ThriftBinaryCLIService is started.
21/07/05 07:34:25 INFO ThriftCLIService: Starting ThriftBinaryCLIService on port 10016 with 5...500 worker threads
21/07/05 07:34:25 INFO AbstractService: Service:HiveServer2 is started.
21/07/05 07:34:25 INFO HiveThriftServer2: HiveThriftServer2 started
21/07/05 07:34:25 INFO HiveServer2: Shutting down HiveServer2
21/07/05 07:34:25 INFO ThriftCLIService: Thrift server has stopped
21/07/05 07:34:25 INFO AbstractService: Service:ThriftBinaryCLIService is stopped.
21/07/05 07:34:25 INFO AbstractService: Service:OperationManager is stopped.
21/07/05 07:34:25 INFO AbstractService: Service:SessionManager is stopped.
21/07/05 07:34:25 INFO AbstractService: Service:CLIService is stopped.
21/07/05 07:34:25 INFO AbstractService: Service:HiveServer2 is stopped.
21/07/05 07:34:25 INFO SparkUI: Stopped Spark web UI at http://spark-thrift-server-5f18bd7a75961a32-driver-svc.hive-spo.svc:4040
21/07/05 07:34:25 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
21/07/05 07:34:25 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
21/07/05 07:34:25 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
21/07/05 07:34:25 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
21/07/05 07:34:25 INFO MemoryStore: MemoryStore cleared
21/07/05 07:34:25 INFO BlockManager: BlockManager stopped
21/07/05 07:34:25 INFO BlockManagerMaster: BlockManagerMaster stopped
21/07/05 07:34:25 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
21/07/05 07:34:25 INFO SparkContext: Successfully stopped SparkContext
21/07/05 07:34:25 INFO SparkContext: SparkContext already stopped.
21/07/05 07:34:25 INFO ShutdownHookManager: Shutdown hook called
21/07/05 07:34:25 INFO ShutdownHookManager: Deleting directory /localdir/spark-b13275f3-1f65-4aab-ab54-b161c012a71c
21/07/05 07:34:25 INFO ShutdownHookManager: Deleting directory /tmp/spark-b3b5e6d0-32a0-4723-a9fa-09b6756e4df5
21/07/05 07:34:25 INFO ShutdownHookManager: Deleting directory /tmp/spark-2b1fd9d7-7d38-4dbb-9f9f-aa9c8ac746aa
21/07/05 07:34:25 INFO FileSystem: Ignoring failure to deleteOnExit for path /tmp/hive/185/d2d9e737-9c2b-4dd9-85ec-63ec3534dfec
21/07/05 07:34:25 INFO MetricsSystemImpl: Stopping s3a-file-system metrics system...
21/07/05 07:34:25 INFO MetricsSystemImpl: s3a-file-system metrics system stopped.
21/07/05 07:34:25 INFO MetricsSystemImpl: s3a-file-system metrics system shutdown complete.
Any idea where to continue my search?
Best and many thanks
Meikel
-----Original Message-----
From: Bode, Meikel, NMA-CFD <Me...@Bertelsmann.de>
Sent: Sonntag, 16. Mai 2021 17:47
To: mykidong <my...@gmail.com>; user@spark.apache.org
Subject: RE: Thrift2 Server on Kubernetes?
Hi Kidong Lee,
Thank you for your email. Actually I came across your blog and it seems very complete.
As you write, it's not easy to bring Spark Thrift2 to K8s, and because you had to write your own wrapper, I have the impression that it is not really officially supported, despite the fact that it works :)
My question aims more at official support, as with Spark 3.1.1 Kubernetes is now officially supported.
Do you have any info on that?
Thanks and all the best,
Meikel
PS: I will give your solution a try anyway :)
-----Original Message-----
From: mykidong <my...@gmail.com>
Sent: Freitag, 14. Mai 2021 14:12
To: user@spark.apache.org
Subject: Re: Thrift2 Server on Kubernetes?
Hi Meikel,
If you want to run Spark Thrift Server on Kubernetes, take a look at my blog
post: https://itnext.io/hive-on-spark-in-kubernetes-115c8e9fa5c1
Cheers,
- Kidong Lee.
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org
RE: Thrift2 Server on Kubernetes?
Posted by "Bode, Meikel, NMA-CFD" <Me...@Bertelsmann.de>.
Hi Kidong Lee,
Thank you for your email. Actually I came across your blog and it seems very complete.
As you write, it's not easy to bring Spark Thrift2 to K8s, and because you had to write your own wrapper, I have the impression that it is not really officially supported, despite the fact that it works :)
My question aims more at official support, as with Spark 3.1.1 Kubernetes is now officially supported.
Do you have any info on that?
Thanks and all the best,
Meikel
PS: I will give your solution a try anyway :)
-----Original Message-----
From: mykidong <my...@gmail.com>
Sent: Freitag, 14. Mai 2021 14:12
To: user@spark.apache.org
Subject: Re: Thrift2 Server on Kubernetes?
Hi Meikel,
If you want to run Spark Thrift Server on Kubernetes, take a look at my blog
post: https://itnext.io/hive-on-spark-in-kubernetes-115c8e9fa5c1
Cheers,
- Kidong Lee.
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org
Re: Thrift2 Server on Kubernetes?
Posted by mykidong <my...@gmail.com>.
Hi Meikel,
If you want to run Spark Thrift Server on Kubernetes, take a look at my blog
post: https://itnext.io/hive-on-spark-in-kubernetes-115c8e9fa5c1
Cheers,
- Kidong Lee.
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org