Posted to dev@tinkerpop.apache.org by "Daniel Kuppitz (JIRA)" <ji...@apache.org> on 2016/02/26 00:25:18 UTC
[jira] [Created] (TINKERPOP-1177) Improve documentation around Spark's storage levels
Daniel Kuppitz created TINKERPOP-1177:
-----------------------------------------
Summary: Improve documentation around Spark's storage levels
Key: TINKERPOP-1177
URL: https://issues.apache.org/jira/browse/TINKERPOP-1177
Project: TinkerPop
Issue Type: Improvement
Components: documentation
Affects Versions: 3.1.1-incubating
Reporter: Daniel Kuppitz
We should add a prominent warning regarding the two storage level settings, especially since we have {{MEMORY_ONLY}} as the default value. What is the impact of changing either of these settings? What is kept in memory and what is not? How much memory will my executors need when I use one storage level or another?
Currently these two settings mean nothing to the average user (you will probably only know how to use them if you have a deep understanding of how Spark works). For the average user we should probably also consider making {{DISK_ONLY}} (or perhaps {{MEMORY_AND_DISK}}?) the default.
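For context, a sketch of how these settings appear in a SparkGraphComputer properties file (the property names below match TinkerPop's {{gremlin.spark}} constants as of the 3.1.x line; treat the exact keys and values as an illustration rather than authoritative documentation):

```properties
# Storage level for the graph RDD itself while a job is running.
# MEMORY_ONLY (the current default) keeps deserialized partitions in
# executor heap and silently recomputes any partition that does not fit.
gremlin.spark.graphStorageLevel=MEMORY_AND_DISK

# Storage level used when persisting an RDD between jobs
# (only relevant when gremlin.spark.persistContext is enabled).
gremlin.spark.persistStorageLevel=DISK_ONLY
```

The practical trade-off the documentation should spell out: {{MEMORY_ONLY}} is fastest but requires executors sized to hold the working partitions, while {{MEMORY_AND_DISK}} and {{DISK_ONLY}} trade speed for predictability on graphs larger than available executor memory.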
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)