Posted to user@hawq.apache.org by Gregory Chase <gc...@pivotal.io> on 2015/11/04 00:20:53 UTC

Using Spark with HAWQ pulling from large table - needs answer on StackOverflow

Greetings HAWQ community,
There's an unanswered question about using Spark with HAWQ to analyze a
large table.

I realize this is more of a Spark question than a HAWQ question, but it
comes from the same user.

If someone has an idea, please offer an answer:
http://stackoverflow.com/questions/33004441/setting-spark-memory-allocations-for-extracting-125-gb-of-data-executorlostfai
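For anyone picking this up, here is a minimal sketch of one common way to
pull a very large HAWQ table into Spark without overloading individual
executors: read it through HAWQ's PostgreSQL-compatible JDBC interface with
many partitions, and give the executors extra memory overhead. The
connection URL, table name, partition column, bounds, and memory values
below are placeholders I made up for illustration, not a recommendation for
the poster's cluster:

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.SQLContext

  object HawqExtract {
    def main(args: Array[String]): Unit = {
      // Placeholder sizes; tune to the cluster. Extra YARN memory overhead
      // often helps with ExecutorLostFailure caused by container kills.
      val conf = new SparkConf()
        .setAppName("hawq-extract")
        .set("spark.executor.memory", "8g")
        .set("spark.yarn.executor.memoryOverhead", "2048")
      val sc = new SparkContext(conf)
      val sqlContext = new SQLContext(sc)

      // Split the read into many JDBC partitions so no single executor
      // has to hold a huge slice of the table at once.
      // Table and column names here are hypothetical.
      val df = sqlContext.read.jdbc(
        url = "jdbc:postgresql://hawq-master:5432/gpadmin",
        table = "big_table",
        columnName = "id",          // numeric column to partition on
        lowerBound = 1L,
        upperBound = 100000000L,
        numPartitions = 200,
        connectionProperties = new java.util.Properties())

      // Write straight back out rather than collecting to the driver.
      df.write.parquet("hdfs:///tmp/big_table_extract")
      sc.stop()
    }
  }

The asker's exact setup may differ (they may be using a different connector
than JDBC), so please treat this only as a starting point when answering.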

-Greg

-- 
Greg Chase

Director of Big Data Communities
http://www.pivotal.io/big-data

Pivotal Software
http://www.pivotal.io/

650-215-0477
@GregChase
Blog: http://geekmarketing.biz/