Posted to issues@spark.apache.org by "Harish (JIRA)" <ji...@apache.org> on 2016/10/14 16:14:20 UTC

[jira] [Created] (SPARK-17942) OpenJDK 64-Bit Server VM warning: Try increasing the code cache size using -XX:ReservedCodeCacheSize=

Harish created SPARK-17942:
------------------------------

             Summary: OpenJDK 64-Bit Server VM warning: Try increasing the code cache size using -XX:ReservedCodeCacheSize=
                 Key: SPARK-17942
                 URL: https://issues.apache.org/jira/browse/SPARK-17942
             Project: Spark
          Issue Type: Bug
          Components: PySpark
    Affects Versions: 2.0.1
            Reporter: Harish


My code snippet is at the location below. In that snippet I included only a few columns, but in my test case I have data with 10M rows and 10,000 columns.
http://stackoverflow.com/questions/39602596/convert-groupbykey-to-reducebykey-pyspark
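
For reference, a minimal sketch of the conversion the linked post asks about, assuming a simple per-key sum (the actual snippet and the 10M x 10,000 data are only in the post above):

    from operator import add
    from pyspark import SparkContext

    sc = SparkContext(appName="reduce-vs-group")  # hypothetical setup

    # Toy key/value pairs standing in for the real wide dataset.
    rdd = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])

    # groupByKey shuffles every value for a key before aggregating...
    grouped = rdd.groupByKey().mapValues(sum)

    # ...whereas reduceByKey combines map-side first, cutting shuffle volume.
    reduced = rdd.reduceByKey(add)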

I see the messages below in a Spark 2.0.2 snapshot build.
# stderr of the node
OpenJDK 64-Bit Server VM warning: CodeCache is full. Compiler has been disabled.
OpenJDK 64-Bit Server VM warning: Try increasing the code cache size using -XX:ReservedCodeCacheSize=

# stdout of the node
CodeCache: size=245760Kb used=242680Kb max_used=242689Kb free=3079Kb
 bounds [0x00007f32c5000000, 0x00007f32d4000000, 0x00007f32d4000000]
 total_blobs=41388 nmethods=40792 adapters=501
 compilation: disabled (not enough contiguous free space left)
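
A possible workaround (rather than a root-cause fix) might be to raise the JVM code cache limit through Spark's extra JVM options at submit time; the 512m value and script name below are only illustrative:

    spark-submit \
      --conf "spark.driver.extraJavaOptions=-XX:ReservedCodeCacheSize=512m" \
      --conf "spark.executor.extraJavaOptions=-XX:ReservedCodeCacheSize=512m" \
      my_job.py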
