Posted to issues@spark.apache.org by "ramakrishna chilaka (Jira)" <ji...@apache.org> on 2021/11/09 10:44:00 UTC

[jira] [Updated] (SPARK-37254) 100% CPU usage on Spark Thrift Server.

     [ https://issues.apache.org/jira/browse/SPARK-37254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramakrishna chilaka updated SPARK-37254:
----------------------------------------
    Description: 
We are trying to use the Spark Thrift Server as a distributed SQL query engine. Queries work when the resident memory occupied by the Spark Thrift Server (as reported by htop) is comfortably below the configured driver memory. The same queries drive CPU usage to 100% when the resident memory grows above the configured driver memory, and the server then stays pegged at 100% CPU. I have incremental collect set to false because I need faster responses for exploratory queries. I am trying to understand the following points:
 * Why doesn't the Spark Thrift Server release memory back when no queries are running?
 * What causes the Spark Thrift Server to run at 100% CPU on all cores when its resident memory exceeds the driver memory (usually by about 10%), and why do queries simply hang?
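
For context, a rough sketch of how the thrift server is launched with these settings (the 8g driver memory below is illustrative only, not our exact value):

  # incrementalCollect=false makes the server collect each full result set on the driver,
  # which gives faster responses for exploratory queries than incremental fetching
  sbin/start-thriftserver.sh \
    --driver-memory 8g \
    --conf spark.sql.thriftServer.incrementalCollect=false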

  was:
We are trying to use the Spark Thrift Server as a distributed SQL query engine. Queries work when the resident memory occupied by the Spark Thrift Server (as reported by htop) is comfortably below the configured driver memory. The same queries drive CPU usage to 100% when the resident memory grows above the configured driver memory. I have incremental collect set to false because I need faster responses for exploratory queries. I am trying to understand the following points:
 * Why doesn't the Spark Thrift Server release memory back when no queries are running?
 * What causes the Spark Thrift Server to run at 100% CPU on all cores, and why do queries simply hang?


> 100% CPU usage on Spark Thrift Server.
> --------------------------------------
>
>                 Key: SPARK-37254
>                 URL: https://issues.apache.org/jira/browse/SPARK-37254
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.1.2
>            Reporter: ramakrishna chilaka
>            Priority: Major
>
> We are trying to use the Spark Thrift Server as a distributed SQL query engine. Queries work when the resident memory occupied by the Spark Thrift Server (as reported by htop) is comfortably below the configured driver memory. The same queries drive CPU usage to 100% when the resident memory grows above the configured driver memory, and the server then stays pegged at 100% CPU. I have incremental collect set to false because I need faster responses for exploratory queries. I am trying to understand the following points:
>  * Why doesn't the Spark Thrift Server release memory back when no queries are running?
>  * What causes the Spark Thrift Server to run at 100% CPU on all cores when its resident memory exceeds the driver memory (usually by about 10%), and why do queries simply hang?



