Posted to user@spark.apache.org by Sunil Tripathy <su...@gmail.com> on 2016/09/09 22:42:04 UTC

Spark Memory Allocation Exception

Hi,
  I am using Spark 1.6 to load a historical activity dataset covering the last
3-4 years and write it to a Parquet file partitioned by day. I am getting the
following exception while the insert command runs to write the data into the
Parquet partitions.

org.apache.hadoop.hive.ql.metadata.HiveException:
parquet.hadoop.MemoryManager$1: New Memory allocation 1047552 bytes is
smaller than the minimum allocation size of 1048576 bytes.
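
For context, the insert is roughly along these lines (a simplified sketch; the
source read, table names, and column names below are placeholders, not the
actual ones):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("history-load"))
    val hiveContext = new HiveContext(sc)

    // Source: the raw history activity data, registered as a temp table
    // (placeholder path and format).
    hiveContext.read.json("/data/history_activity").registerTempTable("activity_raw")

    // Target: a Parquet-backed Hive table partitioned by day, loaded with a
    // dynamic-partition insert.
    hiveContext.setConf("hive.exec.dynamic.partition", "true")
    hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")

    hiveContext.sql(
      """INSERT INTO TABLE activity_parquet PARTITION (activity_day)
        |SELECT user_id, event_type, event_time, activity_day
        |FROM activity_raw""".stripMargin)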

The input data size is around 350 GB, and the cluster has around 145 nodes
with 384 GB of memory on each node.
Any pointers to resolve the issue will be appreciated.

Thanks