Posted to issues@spark.apache.org by "Ben McCann (JIRA)" <ji...@apache.org> on 2016/07/22 21:55:20 UTC
[jira] [Created] (SPARK-16688) OpenHashSet.MAX_CAPACITY is always based on Int even when using Long
Ben McCann created SPARK-16688:
----------------------------------
Summary: OpenHashSet.MAX_CAPACITY is always based on Int even when using Long
Key: SPARK-16688
URL: https://issues.apache.org/jira/browse/SPARK-16688
Project: Spark
Issue Type: Bug
Components: Spark Core
Affects Versions: 1.6.2, 2.0.0
Reporter: Ben McCann
MAX_CAPACITY is hardcoded to 1073741824 (1 << 30), and the hasher always reduces a Long key to an Int:
{code}
// Capacity can never exceed 2^30.
val MAX_CAPACITY = 1 << 30

// Hashes a Long down to an Int by XOR-folding the high and low 32 bits.
class LongHasher extends Hasher[Long] {
  override def hash(o: Long): Int = (o ^ (o >>> 32)).toInt
}
{code}
I'd like to stick more than 1 billion items in my hash map. Spark's all about big data, right?
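For anyone curious why 1 << 30 is the ceiling: here is a minimal sketch (not Spark's actual code; CapacityDemo and nextPowerOf2 are hypothetical names, with nextPowerOf2 built on java.lang.Integer.highestOneBit) showing that the next power-of-two capacity after 2^30 overflows Int:
{code}
// Hypothetical sketch: why a power-of-two, Int-indexed capacity tops out at 1 << 30.
object CapacityDemo {
  val MAX_CAPACITY = 1 << 30

  // Round n up to the next power of two (hypothetical helper,
  // built on java.lang.Integer.highestOneBit).
  def nextPowerOf2(n: Int): Int = {
    val highBit = Integer.highestOneBit(n)
    if (highBit == n) n else highBit << 1
  }

  def main(args: Array[String]): Unit = {
    println(MAX_CAPACITY)                   // 1073741824
    // Growing past MAX_CAPACITY overflows Int: 1 << 31 wraps to Int.MinValue.
    println(nextPowerOf2(MAX_CAPACITY + 1)) // -2147483648
  }
}
{code}
Raising the cap would mean more than changing the constant: JVM arrays are themselves indexed by Int, so a larger table would need a Long-addressable backing store such as chained arrays.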