Posted to issues@spark.apache.org by "Thinh Nguyen (Jira)" <ji...@apache.org> on 2021/12/29 18:58:00 UTC

[jira] [Updated] (SPARK-37781) Java Out-Of-Memory Error when retrieving value from dataframe

     [ https://issues.apache.org/jira/browse/SPARK-37781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thinh Nguyen updated SPARK-37781:
---------------------------------
    Description: 
My submitted Spark application keeps running into the following error:
{code:java}
Caused by: java.lang.OutOfMemoryError: Java heap space{code}
 

A dataframe is created from a JDBC query to a Postgres database:

 
{code:java}
var dataframeVariable = sparkSession.read
  .format("jdbc")
  .option("url", urlVariable)
  .option("driver", driverVariable)
  .option("user", usernameVariable)
  .option("password", passwordVariable)
  .option("query", "select max(timestamp) as timestamp from \"" + tableNameVariable + "\"")
  .load()
{code}
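
As a sanity check (a sketch reusing the variables above, not something I have confirmed changes anything), the schema and physical plan can be printed to verify that the aggregation is pushed down to Postgres and only the single aggregated column comes back:
{code:java}
// Sketch only, reusing dataframeVariable from the snippet above.
// The schema should show a single "timestamp" column, and the physical
// plan should show a JDBC scan of the pushed-down max() query.
dataframeVariable.printSchema()
dataframeVariable.explain()
{code}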
 

The error occurs when the program tries to extract a value from the dataframe, which contains only a single row and a single column. Here are the approaches I have tried; each one causes the application to hang and eventually fail with the OOM error.
{code:java}
var lastTimestamp = dataframeVariable.first().getDouble(0){code}
{code:java}
var timeStampVal = dataframeVariable.select(col("timestamp")).collect(){code}
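
What I am ultimately trying to do is something along these lines (sketch only; I am assuming the aggregated column comes back as a double, and getTimestamp would presumably be needed instead if it is a SQL timestamp):
{code:java}
import org.apache.spark.sql.functions.col

// Sketch only, reusing dataframeVariable from the snippet above.
// first() fetches exactly one Row from the single-row result.
val row = dataframeVariable.select(col("timestamp")).first()
val lastTimestamp = row.getDouble(0) // or row.getTimestamp(0) if the column is a timestamp
{code}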
 

After some looking around, I found several suggestions to change the Spark memory-management configuration to address this issue, but I am not sure where to start with that. Any guidance would be helpful.
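
From what I can tell, the settings people point at are the driver and executor heap sizes and the driver result-size cap, passed at submit time roughly like this (sketch only; the class name, jar, and sizes are placeholders, not values I have tried):
{code}
# Sketch only; class/jar names and memory sizes are placeholders.
spark-submit \
  --driver-memory 8g \
  --executor-memory 8g \
  --conf spark.driver.maxResultSize=2g \
  --class com.example.MyApp myApp.jar
{code}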

 

*Currently using:* Spark 3.1.2, Scala 2.12, Java 11

*Spark Cluster Spec:* 8 workers, 48 cores, 64GB memory

*Application Submitted Spec:* 1 worker, 4 driver and executor cores, 4GB driver and executor memory

> Java Out-Of-Memory Error when retrieving value from dataframe
> -------------------------------------------------------------
>
>                 Key: SPARK-37781
>                 URL: https://issues.apache.org/jira/browse/SPARK-37781
>             Project: Spark
>          Issue Type: Question
>          Components: Java API, Spark Submit, SQL
>    Affects Versions: 3.1.2
>            Reporter: Thinh Nguyen
>            Priority: Blocker
>



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
