Posted to dev@hive.apache.org by Sergio Pena <se...@cloudera.com> on 2015/03/06 18:18:20 UTC

Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/
-----------------------------------------------------------

Review request for hive, Ryan Blue and cheng xu.


Bugs: HIVE-9658
    https://issues.apache.org/jira/browse/HIVE-9658


Repository: hive-git


Description
-------

This patch passes primitive Java objects directly to Hive object inspectors instead of wrapping them in primitive Writable objects.
This helps reduce memory usage.

I did not bypass other complex types, such as binary, decimal, and date/timestamp, because their Writable objects are needed in other parts of the code,
and creating them later reduces throughput (ops/s). It is better to pay that cost up front.
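
As an illustrative, self-contained sketch of the idea (the classes below are hypothetical stand-ins, not the actual Hive/Parquet API): instead of allocating a Writable wrapper for every value read from Parquet, the converter hands the boxed Java value straight through, saving one allocation per value.

```java
// Hypothetical sketch of the idea behind the patch -- not the actual
// ETypeConverter code. IntWritableLike stands in for Hadoop's IntWritable.
public class BypassSketch {
  // Stand-in for a Hadoop Writable wrapper around an int.
  static final class IntWritableLike {
    private final int value;
    IntWritableLike(int value) { this.value = value; }
    int get() { return value; }
  }

  // Before: allocate a wrapper object per value read.
  static Object convertWithWritable(int parquetValue) {
    return new IntWritableLike(parquetValue);
  }

  // After: hand back the boxed primitive directly (auto-boxed Integer).
  static Object convertBypassing(int parquetValue) {
    return parquetValue;
  }

  public static void main(String[] args) {
    System.out.println(convertWithWritable(42) instanceof IntWritableLike); // true
    System.out.println(convertBypassing(42) instanceof Integer);            // true
  }
}
```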


Diffs
-----

  itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java 4f6985cd13017ce37f4f0c100b16a27aa5b02f8b 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java c915f728fc9b27da0fabefab5d8f5faa53640b78 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetInputFormat.java 0391229723cc3ecef551fa44b8456b0d2ac93fb5 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/VectorizedParquetInputFormat.java d7edd52614771857d1b21971a66894841c248ef9 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ConverterParent.java 6ff6b473c9f1867bc14bb597094ddb92487cc954 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java a43661eb54ba29692c07c264584b5aecf648ef99 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 377e362979156b8d52d103192b22bd7f19fa683b 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveCollectionConverter.java f1c8b6f13718b37f590263e5b35ed6c327f5cf4f 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java c6d03a19029d5bcc86b998dd7a8609973648c103 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveStructConverter.java f95d15eddc21bc432fa53572de5756751a13341a 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/Repeated.java ee57b31dac53d99af0c5a520f51102796ca32fd3 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java 47cd68200d3be9260aa35385d0dade74d7dc215d 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java 6dc85faecabd59dfc616e908926c1f6b6db372de 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java 49bf1c5325833993f4c09efdf1546af560783c28 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java bb066afd38aea6b2eb119b0f8ec8d00af57dc187 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/DeepParquetHiveMapInspector.java 143d72e76502d4877e8208181d9743259051dcea 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java bde0dcbb3978ba47b15ae2c9bbe2f87ed3984ab1 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 7fd5e9612d4e3c9bf3b816bc48dbdbe59fb8a5a8 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/StandardParquetHiveMapInspector.java 22250b30a14d52907fb22d4f44b93c7633c6a89e 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetByteInspector.java 864f56292fa4856df155f546064e4a6732cc663f 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetShortInspector.java 39f265777c7e164382117e3902c3b6e491295f70 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/AbstractTestParquetDirect.java 3a476731e31bf38822f0d530f0aea2eadb675a49 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestArrayCompatibility.java d45d8eeb9e8a61f254098ab15d0305fc71152abd 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 8f03c5b403332f7b36b2271a2246a0fc90b3bfba 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapStructures.java 3c7401ffbe88ce66b96f9cceab4e9c3d6267f8fe 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapredParquetInputFormat.java 1a54bf5797efd5859c9e665bcc7134168e5d193f 
  serde/src/java/org/apache/hadoop/hive/serde2/io/HiveArrayWritable.java PRE-CREATION 

Diff: https://reviews.apache.org/r/31800/diff/


Testing
-------

Some performance tests were done to validate this.

Schema: int,double,boolean,string,array<int>,map<string,string>,struct<a:int,b:int>
  
- JMH microbenchmarks of Parquet read calls.
  
  Before: 579 ops/s
  After:  651 ops/s

- YourKit Java Profiler measurements of recorded memory objects.
  Reading 20,000 random rows (10 times)
  
  Before:
     Objects recorded:   1,863,610
     Objects size:       42,373,808
     Total memory usage: 29%
     
  After:
     Objects recorded:   1,596,804
     Objects size:       34,192,832
     Total memory usage: 24%

All tests were run multiple times to confirm the results were consistent.


Thanks,

Sergio Pena


Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by Sergio Pena <se...@cloudera.com>.

> On March 20, 2015, 5:47 a.m., cheng xu wrote:
> > Hi Sergio, thank you for your update. Just few more minor suggestions.

Thanks Ferd for your comments.


> On March 20, 2015, 5:47 a.m., cheng xu wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveCollectionConverter.java, lines 111-113
> > <https://reviews.apache.org/r/31800/diff/3/?file=890404#file890404line111>
> >
> >     How about moving this code block to the top of the class definition?
> >     	Object objs[] = new Object[] { null, null };

It cannot be moved.
The start() method is called internally by Parquet every time a new record is read from the file. This ensures the array is null before the set() method is called.
Also, the KeyValueConverter class is instantiated only once.
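
The lifecycle described above can be sketched with a self-contained mock; this is a hypothetical illustration of the converter pattern, not the actual Parquet/Hive classes:

```java
// Hypothetical sketch: the converter object is created once, but the
// Parquet reader calls start() at the beginning of every record, so the
// reused slots must be cleared on each start() call -- a one-time field
// initializer would leave stale values from the previous record.
public class KeyValueConverterSketch {
  private final Object[] objs = new Object[2]; // allocated once

  // Called by the reader at the start of every record.
  public void start() {
    objs[0] = null;
    objs[1] = null;
  }

  public void set(int index, Object value) { objs[index] = value; }

  public Object get(int index) { return objs[index]; }

  public static void main(String[] args) {
    KeyValueConverterSketch c = new KeyValueConverterSketch();
    c.start();
    c.set(0, "key1");             // record 1 sets only the key
    c.start();                    // record 2 begins: slots cleared again
    System.out.println(c.get(0)); // null -- stale value did not leak
  }
}
```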


> On March 20, 2015, 5:47 a.m., cheng xu wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java, line 534
> > <https://reviews.apache.org/r/31800/diff/3/?file=890398#file890398line534>
> >
> >     Is that possible to pass down the switch block into buildObjectAssignMethod? In current implement, we already do this kind of thing in that method.

I don't think it is possible. I tried thinking of a different approach, but I couldn't find one.

The buildObjectAssignMethod() method needs to know which primitive category is passed so that it can use, for instance, a VectorLongColumnAssign or VectorDoubleColumnAssign. Each of these Vector... classes overrides the assignObjectValue() method, which receives the value and delegates to assignLong() or assignDouble().

I'll keep an eye on this class to see if there are improvements I can make.
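
The dispatch pattern described above can be sketched as a self-contained mock. The class names mirror Hive's VectorColumnAssign hierarchy for illustration, but this is not the real API:

```java
// Hedged sketch of the factory dispatch: the primitive category picks a
// concrete assigner, and each assigner's assignObjectValue() override
// delegates to its typed assign method.
public class AssignDispatchSketch {
  enum PrimitiveCategory { LONG, DOUBLE }

  abstract static class VectorColumnAssign {
    abstract void assignObjectValue(Object value, int row);
  }

  static class VectorLongColumnAssign extends VectorColumnAssign {
    long[] vector = new long[8];
    void assignLong(long v, int row) { vector[row] = v; }
    @Override void assignObjectValue(Object value, int row) {
      assignLong(((Number) value).longValue(), row);
    }
  }

  static class VectorDoubleColumnAssign extends VectorColumnAssign {
    double[] vector = new double[8];
    void assignDouble(double v, int row) { vector[row] = v; }
    @Override void assignObjectValue(Object value, int row) {
      assignDouble(((Number) value).doubleValue(), row);
    }
  }

  // The factory must switch on the category to pick the concrete assigner.
  static VectorColumnAssign buildObjectAssign(PrimitiveCategory cat) {
    switch (cat) {
      case LONG:   return new VectorLongColumnAssign();
      case DOUBLE: return new VectorDoubleColumnAssign();
      default:     throw new IllegalArgumentException("unsupported: " + cat);
    }
  }

  public static void main(String[] args) {
    VectorColumnAssign a = buildObjectAssign(PrimitiveCategory.LONG);
    a.assignObjectValue(42, 0); // boxed Integer flows through untyped path
    System.out.println(((VectorLongColumnAssign) a).vector[0]); // 42
  }
}
```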


- Sergio


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/#review77168
-----------------------------------------------------------


On March 10, 2015, 6:02 p.m., Sergio Pena wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31800/
> -----------------------------------------------------------
> 
> (Updated March 10, 2015, 6:02 p.m.)
> 
> 
> Review request for hive, Ryan Blue and cheng xu.
> 
> 
> Bugs: HIVE-9658
>     https://issues.apache.org/jira/browse/HIVE-9658
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> This patch bypasses primitive java objects to hive object inspectors without using primitive Writable objects.
> It helps to reduce memory usage.
> 
> I did not bypass other complex objects, such as binaries, decimal and date/timestamp, because their Writable objects are needed in other parts of the code,
> and creating them later takes more ops/s to do it. Better save time at the beginning.
> 
> 
> Diffs
> -----
> 
>   itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java 4f6985cd13017ce37f4f0c100b16a27aa5b02f8b 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java c915f728fc9b27da0fabefab5d8f5faa53640b78 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetInputFormat.java 0391229723cc3ecef551fa44b8456b0d2ac93fb5 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/VectorizedParquetInputFormat.java d7edd52614771857d1b21971a66894841c248ef9 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ConverterParent.java 6ff6b473c9f1867bc14bb597094ddb92487cc954 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java a43661eb54ba29692c07c264584b5aecf648ef99 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 3fc012970e23bbc188ce2a2e2ba0b04bc6f22317 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveCollectionConverter.java f1c8b6f13718b37f590263e5b35ed6c327f5cf4f 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java c6d03a19029d5bcc86b998dd7a8609973648c103 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveStructConverter.java f95d15eddc21bc432fa53572de5756751a13341a 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/Repeated.java ee57b31dac53d99af0c5a520f51102796ca32fd3 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java 57ae7a9740d55b407cadfc8bc030593b29f90700 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java a26199612cf338e336f210f29acb0398c536e1f9 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java.orig PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java 49bf1c5325833993f4c09efdf1546af560783c28 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java 609188206f88e296d893b84bcaaab53f974e6b7d 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/DeepParquetHiveMapInspector.java 143d72e76502d4877e8208181d9743259051dcea 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ObjectArrayWritableObjectInspector.java PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java bde0dcbb3978ba47b15ae2c9bbe2f87ed3984ab1 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 7fd5e9612d4e3c9bf3b816bc48dbdbe59fb8a5a8 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/StandardParquetHiveMapInspector.java 22250b30a14d52907fb22d4f44b93c7633c6a89e 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetByteInspector.java 864f56292fa4856df155f546064e4a6732cc663f 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetShortInspector.java 39f265777c7e164382117e3902c3b6e491295f70 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/AbstractTestParquetDirect.java 3a476731e31bf38822f0d530f0aea2eadb675a49 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestArrayCompatibility.java d45d8eeb9e8a61f254098ab15d0305fc71152abd 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 8f03c5b403332f7b36b2271a2246a0fc90b3bfba 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapStructures.java 3c7401ffbe88ce66b96f9cceab4e9c3d6267f8fe 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapredParquetInputFormat.java 1a54bf5797efd5859c9e665bcc7134168e5d193f 
>   serde/src/java/org/apache/hadoop/hive/serde2/io/ObjectArrayWritable.java PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/31800/diff/
> 
> 
> Testing
> -------
> 
> Some performance tests were done to validate this.
> 
> Schema: int,double,boolean,string,array<int>,map<string,string>,struct<a:int,b:int>
>   
> - JMH (Microbenchmarks) calls on parquet reads.
>   
>   Before: 579 ops/s
>   After:  651 ops/s
> 
> - YourKit Java Profiler to measure memory objects recorded.
>   Reading 20,000 random rows (10 times)
>   
>   Before:
>      Objects recorded:   1,863,610
>      Objects size:       42,373,808
>      Total memory usage: 29%
>      
>   After:
>      Objects recorded:   1,596,804
>      Objects size:       34,192,832
>      Total memory usage: 24%
> 
> All tests were run multiple times to get same results.
> 
> 
> Thanks,
> 
> Sergio Pena
> 
>


Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by cheng xu <ch...@intel.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/#review77168
-----------------------------------------------------------


Hi Sergio, thank you for your update. Just a few more minor suggestions.


ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java
<https://reviews.apache.org/r/31800/#comment125044>

    Is it possible to push the switch block down into buildObjectAssignMethod? In the current implementation, we already do this kind of thing in that method.



ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveCollectionConverter.java
<https://reviews.apache.org/r/31800/#comment125045>

    How about moving this code block to the top of the class definition?
    	Object objs[] = new Object[] { null, null };



ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveStructConverter.java
<https://reviews.apache.org/r/31800/#comment125046>

    Arrays.fill(elements, null)
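
The suggestion above uses java.util.Arrays.fill, which sets every slot of an existing array in one call. A minimal self-contained sketch, where `elements` stands in for the converter's reused field array:

```java
// java.util.Arrays.fill(array, value) assigns the value to every index,
// equivalent to a manual loop but more concise.
import java.util.Arrays;

public class FillSketch {
  public static void main(String[] args) {
    Object[] elements = { "a", "b", "c" };
    Arrays.fill(elements, null); // clear all slots before the next record
    System.out.println(elements[0] == null && elements[2] == null); // true
  }
}
```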



ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java.orig
<https://reviews.apache.org/r/31800/#comment125047>

    Please remove this file. It shouldn't be committed into trunk.



ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java
<https://reviews.apache.org/r/31800/#comment125048>

    Please only import needed packages.


- cheng xu


On March 10, 2015, 6:02 p.m., Sergio Pena wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31800/
> -----------------------------------------------------------
> 
> (Updated March 10, 2015, 6:02 p.m.)
> 
> 
> Review request for hive, Ryan Blue and cheng xu.
> 
> 
> Bugs: HIVE-9658
>     https://issues.apache.org/jira/browse/HIVE-9658
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> This patch bypasses primitive java objects to hive object inspectors without using primitive Writable objects.
> It helps to reduce memory usage.
> 
> I did not bypass other complex objects, such as binaries, decimal and date/timestamp, because their Writable objects are needed in other parts of the code,
> and creating them later takes more ops/s to do it. Better save time at the beginning.
> 
> 
> Diffs
> -----
> 
>   itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java 4f6985cd13017ce37f4f0c100b16a27aa5b02f8b 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java c915f728fc9b27da0fabefab5d8f5faa53640b78 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetInputFormat.java 0391229723cc3ecef551fa44b8456b0d2ac93fb5 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/VectorizedParquetInputFormat.java d7edd52614771857d1b21971a66894841c248ef9 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ConverterParent.java 6ff6b473c9f1867bc14bb597094ddb92487cc954 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java a43661eb54ba29692c07c264584b5aecf648ef99 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 3fc012970e23bbc188ce2a2e2ba0b04bc6f22317 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveCollectionConverter.java f1c8b6f13718b37f590263e5b35ed6c327f5cf4f 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java c6d03a19029d5bcc86b998dd7a8609973648c103 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveStructConverter.java f95d15eddc21bc432fa53572de5756751a13341a 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/Repeated.java ee57b31dac53d99af0c5a520f51102796ca32fd3 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java 57ae7a9740d55b407cadfc8bc030593b29f90700 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java a26199612cf338e336f210f29acb0398c536e1f9 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java.orig PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java 49bf1c5325833993f4c09efdf1546af560783c28 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java 609188206f88e296d893b84bcaaab53f974e6b7d 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/DeepParquetHiveMapInspector.java 143d72e76502d4877e8208181d9743259051dcea 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ObjectArrayWritableObjectInspector.java PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java bde0dcbb3978ba47b15ae2c9bbe2f87ed3984ab1 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 7fd5e9612d4e3c9bf3b816bc48dbdbe59fb8a5a8 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/StandardParquetHiveMapInspector.java 22250b30a14d52907fb22d4f44b93c7633c6a89e 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetByteInspector.java 864f56292fa4856df155f546064e4a6732cc663f 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetShortInspector.java 39f265777c7e164382117e3902c3b6e491295f70 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/AbstractTestParquetDirect.java 3a476731e31bf38822f0d530f0aea2eadb675a49 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestArrayCompatibility.java d45d8eeb9e8a61f254098ab15d0305fc71152abd 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 8f03c5b403332f7b36b2271a2246a0fc90b3bfba 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapStructures.java 3c7401ffbe88ce66b96f9cceab4e9c3d6267f8fe 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapredParquetInputFormat.java 1a54bf5797efd5859c9e665bcc7134168e5d193f 
>   serde/src/java/org/apache/hadoop/hive/serde2/io/ObjectArrayWritable.java PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/31800/diff/
> 
> 
> Testing
> -------
> 
> Some performance tests were done to validate this.
> 
> Schema: int,double,boolean,string,array<int>,map<string,string>,struct<a:int,b:int>
>   
> - JMH (Microbenchmarks) calls on parquet reads.
>   
>   Before: 579 ops/s
>   After:  651 ops/s
> 
> - YourKit Java Profiler to measure memory objects recorded.
>   Reading 20,000 random rows (10 times)
>   
>   Before:
>      Objects recorded:   1,863,610
>      Objects size:       42,373,808
>      Total memory usage: 29%
>      
>   After:
>      Objects recorded:   1,596,804
>      Objects size:       34,192,832
>      Total memory usage: 24%
> 
> All tests were run multiple times to get same results.
> 
> 
> Thanks,
> 
> Sergio Pena
> 
>


Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by cheng xu <ch...@intel.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/#review77362
-----------------------------------------------------------

Ship it!


Ship It!

- cheng xu


On March 20, 2015, 3:59 p.m., Sergio Pena wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31800/
> -----------------------------------------------------------
> 
> (Updated March 20, 2015, 3:59 p.m.)
> 
> 
> Review request for hive, Ryan Blue and cheng xu.
> 
> 
> Bugs: HIVE-9658
>     https://issues.apache.org/jira/browse/HIVE-9658
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> This patch bypasses primitive java objects to hive object inspectors without using primitive Writable objects.
> It helps to reduce memory usage.
> 
> I did not bypass other complex objects, such as binaries, decimal and date/timestamp, because their Writable objects are needed in other parts of the code,
> and creating them later takes more ops/s to do it. Better save time at the beginning.
> 
> 
> Diffs
> -----
> 
>   itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java 4f6985cd13017ce37f4f0c100b16a27aa5b02f8b 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java c915f728fc9b27da0fabefab5d8f5faa53640b78 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetInputFormat.java 0391229723cc3ecef551fa44b8456b0d2ac93fb5 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/VectorizedParquetInputFormat.java d7edd52614771857d1b21971a66894841c248ef9 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ConverterParent.java 6ff6b473c9f1867bc14bb597094ddb92487cc954 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java a43661eb54ba29692c07c264584b5aecf648ef99 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 3fc012970e23bbc188ce2a2e2ba0b04bc6f22317 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveCollectionConverter.java f1c8b6f13718b37f590263e5b35ed6c327f5cf4f 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java c6d03a19029d5bcc86b998dd7a8609973648c103 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveStructConverter.java f95d15eddc21bc432fa53572de5756751a13341a 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/Repeated.java ee57b31dac53d99af0c5a520f51102796ca32fd3 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java 57ae7a9740d55b407cadfc8bc030593b29f90700 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java a26199612cf338e336f210f29acb0398c536e1f9 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java 49bf1c5325833993f4c09efdf1546af560783c28 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java 609188206f88e296d893b84bcaaab53f974e6b7d 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/DeepParquetHiveMapInspector.java 143d72e76502d4877e8208181d9743259051dcea 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ObjectArrayWritableObjectInspector.java PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java bde0dcbb3978ba47b15ae2c9bbe2f87ed3984ab1 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 7fd5e9612d4e3c9bf3b816bc48dbdbe59fb8a5a8 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/StandardParquetHiveMapInspector.java 22250b30a14d52907fb22d4f44b93c7633c6a89e 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetByteInspector.java 864f56292fa4856df155f546064e4a6732cc663f 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetShortInspector.java 39f265777c7e164382117e3902c3b6e491295f70 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/AbstractTestParquetDirect.java 3a476731e31bf38822f0d530f0aea2eadb675a49 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestArrayCompatibility.java d45d8eeb9e8a61f254098ab15d0305fc71152abd 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 8f03c5b403332f7b36b2271a2246a0fc90b3bfba 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapStructures.java 3c7401ffbe88ce66b96f9cceab4e9c3d6267f8fe 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapredParquetInputFormat.java 1a54bf5797efd5859c9e665bcc7134168e5d193f 
>   serde/src/java/org/apache/hadoop/hive/serde2/io/ObjectArrayWritable.java PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/31800/diff/
> 
> 
> Testing
> -------
> 
> Some performance tests were done to validate this.
> 
> Schema: int,double,boolean,string,array<int>,map<string,string>,struct<a:int,b:int>
>   
> - JMH (Microbenchmarks) calls on parquet reads.
>   
>   Before: 579 ops/s
>   After:  651 ops/s
> 
> - YourKit Java Profiler to measure memory objects recorded.
>   Reading 20,000 random rows (10 times)
>   
>   Before:
>      Objects recorded:   1,863,610
>      Objects size:       42,373,808
>      Total memory usage: 29%
>      
>   After:
>      Objects recorded:   1,596,804
>      Objects size:       34,192,832
>      Total memory usage: 24%
> 
> All tests were run multiple times to get same results.
> 
> 
> Thanks,
> 
> Sergio Pena
> 
>


Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by Sergio Pena <se...@cloudera.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/
-----------------------------------------------------------

(Updated March 20, 2015, 3:59 p.m.)


Review request for hive, Ryan Blue and cheng xu.


Changes
-------

New patch with changes.


Bugs: HIVE-9658
    https://issues.apache.org/jira/browse/HIVE-9658


Repository: hive-git


Description
-------

This patch passes primitive Java objects directly to Hive object inspectors instead of wrapping them in primitive Writable objects.
This helps reduce memory usage.

I did not bypass other complex types, such as binary, decimal, and date/timestamp, because their Writable objects are needed in other parts of the code,
and creating them later reduces throughput (ops/s). It is better to pay that cost up front.


Diffs (updated)
-----

  itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java 4f6985cd13017ce37f4f0c100b16a27aa5b02f8b 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java c915f728fc9b27da0fabefab5d8f5faa53640b78 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetInputFormat.java 0391229723cc3ecef551fa44b8456b0d2ac93fb5 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/VectorizedParquetInputFormat.java d7edd52614771857d1b21971a66894841c248ef9 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ConverterParent.java 6ff6b473c9f1867bc14bb597094ddb92487cc954 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java a43661eb54ba29692c07c264584b5aecf648ef99 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 3fc012970e23bbc188ce2a2e2ba0b04bc6f22317 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveCollectionConverter.java f1c8b6f13718b37f590263e5b35ed6c327f5cf4f 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java c6d03a19029d5bcc86b998dd7a8609973648c103 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveStructConverter.java f95d15eddc21bc432fa53572de5756751a13341a 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/Repeated.java ee57b31dac53d99af0c5a520f51102796ca32fd3 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java 57ae7a9740d55b407cadfc8bc030593b29f90700 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java a26199612cf338e336f210f29acb0398c536e1f9 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java 49bf1c5325833993f4c09efdf1546af560783c28 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java 609188206f88e296d893b84bcaaab53f974e6b7d 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/DeepParquetHiveMapInspector.java 143d72e76502d4877e8208181d9743259051dcea 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ObjectArrayWritableObjectInspector.java PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java bde0dcbb3978ba47b15ae2c9bbe2f87ed3984ab1 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 7fd5e9612d4e3c9bf3b816bc48dbdbe59fb8a5a8 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/StandardParquetHiveMapInspector.java 22250b30a14d52907fb22d4f44b93c7633c6a89e 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetByteInspector.java 864f56292fa4856df155f546064e4a6732cc663f 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetShortInspector.java 39f265777c7e164382117e3902c3b6e491295f70 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/AbstractTestParquetDirect.java 3a476731e31bf38822f0d530f0aea2eadb675a49 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestArrayCompatibility.java d45d8eeb9e8a61f254098ab15d0305fc71152abd 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 8f03c5b403332f7b36b2271a2246a0fc90b3bfba 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapStructures.java 3c7401ffbe88ce66b96f9cceab4e9c3d6267f8fe 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapredParquetInputFormat.java 1a54bf5797efd5859c9e665bcc7134168e5d193f 
  serde/src/java/org/apache/hadoop/hive/serde2/io/ObjectArrayWritable.java PRE-CREATION 

Diff: https://reviews.apache.org/r/31800/diff/


Testing
-------

Some performance tests were done to validate this.

Schema: int,double,boolean,string,array<int>,map<string,string>,struct<a:int,b:int>
  
- JMH microbenchmarks of Parquet read calls.
  
  Before: 579 ops/s
  After:  651 ops/s

- YourKit Java Profiler measurements of recorded memory objects.
  Reading 20,000 random rows (10 times)
  
  Before:
     Objects recorded:   1,863,610
     Objects size:       42,373,808
     Total memory usage: 29%
     
  After:
     Objects recorded:   1,596,804
     Objects size:       34,192,832
     Total memory usage: 24%

All tests were run multiple times to confirm the results were consistent.


Thanks,

Sergio Pena


Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by Sergio Pena <se...@cloudera.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/
-----------------------------------------------------------

(Updated March 10, 2015, 6:02 p.m.)


Review request for hive, Ryan Blue and cheng xu.


Changes
-------

Patch updated with changes from the trunk merge on the parquet branch.


Bugs: HIVE-9658
    https://issues.apache.org/jira/browse/HIVE-9658


Repository: hive-git


Description
-------

This patch passes primitive Java objects directly to Hive object inspectors instead of wrapping them in primitive Writable objects.
This helps reduce memory usage.

I did not bypass other complex types, such as binary, decimal, and date/timestamp, because their Writable objects are needed in other parts of the code,
and creating them later reduces throughput (ops/s). It is better to pay that cost up front.


Diffs (updated)
-----

  itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java 4f6985cd13017ce37f4f0c100b16a27aa5b02f8b 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java c915f728fc9b27da0fabefab5d8f5faa53640b78 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetInputFormat.java 0391229723cc3ecef551fa44b8456b0d2ac93fb5 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/VectorizedParquetInputFormat.java d7edd52614771857d1b21971a66894841c248ef9 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ConverterParent.java 6ff6b473c9f1867bc14bb597094ddb92487cc954 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java a43661eb54ba29692c07c264584b5aecf648ef99 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 3fc012970e23bbc188ce2a2e2ba0b04bc6f22317 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveCollectionConverter.java f1c8b6f13718b37f590263e5b35ed6c327f5cf4f 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java c6d03a19029d5bcc86b998dd7a8609973648c103 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveStructConverter.java f95d15eddc21bc432fa53572de5756751a13341a 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/Repeated.java ee57b31dac53d99af0c5a520f51102796ca32fd3 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java 57ae7a9740d55b407cadfc8bc030593b29f90700 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java a26199612cf338e336f210f29acb0398c536e1f9 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java.orig PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java 49bf1c5325833993f4c09efdf1546af560783c28 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java 609188206f88e296d893b84bcaaab53f974e6b7d 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/DeepParquetHiveMapInspector.java 143d72e76502d4877e8208181d9743259051dcea 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ObjectArrayWritableObjectInspector.java PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java bde0dcbb3978ba47b15ae2c9bbe2f87ed3984ab1 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 7fd5e9612d4e3c9bf3b816bc48dbdbe59fb8a5a8 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/StandardParquetHiveMapInspector.java 22250b30a14d52907fb22d4f44b93c7633c6a89e 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetByteInspector.java 864f56292fa4856df155f546064e4a6732cc663f 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetShortInspector.java 39f265777c7e164382117e3902c3b6e491295f70 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/AbstractTestParquetDirect.java 3a476731e31bf38822f0d530f0aea2eadb675a49 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestArrayCompatibility.java d45d8eeb9e8a61f254098ab15d0305fc71152abd 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 8f03c5b403332f7b36b2271a2246a0fc90b3bfba 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapStructures.java 3c7401ffbe88ce66b96f9cceab4e9c3d6267f8fe 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapredParquetInputFormat.java 1a54bf5797efd5859c9e665bcc7134168e5d193f 
  serde/src/java/org/apache/hadoop/hive/serde2/io/ObjectArrayWritable.java PRE-CREATION 

Diff: https://reviews.apache.org/r/31800/diff/


Testing
-------

Some performance tests were done to validate this.

Schema: int,double,boolean,string,array<int>,map<string,string>,struct<a:int,b:int>
  
- JMH (Microbenchmarks) calls on parquet reads.
  
  Before: 579 ops/s
  After:  651 ops/s

- YourKit Java Profiler to measure memory objects recorded.
  Reading 20,000 random rows (10 times)
  
  Before:
     Objects recorded:   1,863,610
     Objects size:       42,373,808
     Total memory usage: 29%
     
  After:
     Objects recorded:   1,596,804
     Objects size:       34,192,832
     Total memory usage: 24%

All tests were run multiple times to confirm the results were consistent.


Thanks,

Sergio Pena


Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by Sergio Pena <se...@cloudera.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/
-----------------------------------------------------------

(Updated March 9, 2015, 5:32 p.m.)


Review request for hive, Ryan Blue and cheng xu.


Changes
-------

New patch that makes changes due to feedback.


Bugs: HIVE-9658
    https://issues.apache.org/jira/browse/HIVE-9658


Repository: hive-git


Description
-------

This patch passes primitive Java objects directly to the Hive object inspectors instead of wrapping them in primitive Writable objects.
This helps reduce memory usage.

I did not apply the same bypass to other types, such as binary, decimal, and date/timestamp, because their Writable objects are needed in other parts of the code,
and creating them later would cost more per read. It is better to pay that cost up front.


Diffs (updated)
-----

  itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java 4f6985cd13017ce37f4f0c100b16a27aa5b02f8b 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java c915f728fc9b27da0fabefab5d8f5faa53640b78 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetInputFormat.java 0391229723cc3ecef551fa44b8456b0d2ac93fb5 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/VectorizedParquetInputFormat.java d7edd52614771857d1b21971a66894841c248ef9 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ConverterParent.java 6ff6b473c9f1867bc14bb597094ddb92487cc954 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java a43661eb54ba29692c07c264584b5aecf648ef99 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 3fc012970e23bbc188ce2a2e2ba0b04bc6f22317 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveCollectionConverter.java f1c8b6f13718b37f590263e5b35ed6c327f5cf4f 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java c6d03a19029d5bcc86b998dd7a8609973648c103 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveStructConverter.java f95d15eddc21bc432fa53572de5756751a13341a 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/Repeated.java ee57b31dac53d99af0c5a520f51102796ca32fd3 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java 57ae7a9740d55b407cadfc8bc030593b29f90700 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java f69d13cdf6801f6dcc247100eaa71f84d45b57a0 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java 49bf1c5325833993f4c09efdf1546af560783c28 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java 609188206f88e296d893b84bcaaab53f974e6b7d 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/DeepParquetHiveMapInspector.java 143d72e76502d4877e8208181d9743259051dcea 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ObjectArrayWritableObjectInspector.java PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java bde0dcbb3978ba47b15ae2c9bbe2f87ed3984ab1 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 7fd5e9612d4e3c9bf3b816bc48dbdbe59fb8a5a8 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/StandardParquetHiveMapInspector.java 22250b30a14d52907fb22d4f44b93c7633c6a89e 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetByteInspector.java 864f56292fa4856df155f546064e4a6732cc663f 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetShortInspector.java 39f265777c7e164382117e3902c3b6e491295f70 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/AbstractTestParquetDirect.java 3a476731e31bf38822f0d530f0aea2eadb675a49 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestArrayCompatibility.java d45d8eeb9e8a61f254098ab15d0305fc71152abd 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 8f03c5b403332f7b36b2271a2246a0fc90b3bfba 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapStructures.java 3c7401ffbe88ce66b96f9cceab4e9c3d6267f8fe 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapredParquetInputFormat.java 1a54bf5797efd5859c9e665bcc7134168e5d193f 
  serde/src/java/org/apache/hadoop/hive/serde2/io/ObjectArrayWritable.java PRE-CREATION 

Diff: https://reviews.apache.org/r/31800/diff/


Testing
-------

Some performance tests were done to validate this.

Schema: int,double,boolean,string,array<int>,map<string,string>,struct<a:int,b:int>
  
- JMH (Microbenchmarks) calls on parquet reads.
  
  Before: 579 ops/s
  After:  651 ops/s

- YourKit Java Profiler to measure memory objects recorded.
  Reading 20,000 random rows (10 times)
  
  Before:
     Objects recorded:   1,863,610
     Objects size:       42,373,808
     Total memory usage: 29%
     
  After:
     Objects recorded:   1,596,804
     Objects size:       34,192,832
     Total memory usage: 24%

All tests were run multiple times to confirm the results were consistent.


Thanks,

Sergio Pena


Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by Sergio Pena <se...@cloudera.com>.

> On March 9, 2015, 6:20 a.m., Dong Chen wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java, line 53
> > <https://reviews.apache.org/r/31800/diff/1/?file=887397#file887397line53>
> >
> >     How about renaming the class to "HiveArrayWritableObjectInspector", since we use "HiveArrayWritable" here?

Thanks Dong.
I changed it to ObjectArrayWritableObjectInspector as Ferdinand suggested.


- Sergio


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/#review75664
-----------------------------------------------------------




Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by Dong Chen <do...@intel.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/#review75664
-----------------------------------------------------------


LGTM! Thanks Sergio!


ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
<https://reviews.apache.org/r/31800/#comment122888>

    How about renaming the class to "HiveArrayWritableObjectInspector", since we use "HiveArrayWritable" here?



ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetByteInspector.java
<https://reviews.apache.org/r/31800/#comment122889>

    Delete "((IntWritable) o).get();" in comment.
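
For context, the inspector touched by this comment must unwrap a byte from whatever form the value arrives in — after the patch that can be a plain boxed Integer rather than an IntWritable. A minimal hypothetical sketch (not the real ParquetByteInspector, which also handles Hadoop's Writable types):

```java
public class ByteInspectorSketch {
  // Hypothetical unwrapping logic: accept the new boxed forms; the real
  // inspector also has branches for the legacy Writable types.
  static byte getPrimitiveJavaObject(Object o) {
    if (o instanceof Integer) {
      return ((Integer) o).byteValue(); // boxed int arriving from the converter
    }
    if (o instanceof Byte) {
      return (Byte) o;
    }
    throw new UnsupportedOperationException("Cannot inspect " + o.getClass());
  }

  public static void main(String[] args) {
    System.out.println(getPrimitiveJavaObject(100)); // 100
  }
}
```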


- Dong Chen




Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by Sergio Pena <se...@cloudera.com>.

> On March 7, 2015, 12:51 a.m., Ryan Blue wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java, line 514
> > <https://reviews.apache.org/r/31800/diff/1/?file=887384#file887384line514>
> >
> >     If the incoming objects aren't necessarily writables, why doesn't this require cases for `values[i] instanceof Double`, `Boolean`, `Float`, `Integer`, and `Long`?

Thanks Ryan.
I did not see this.


> On March 7, 2015, 12:51 a.m., Ryan Blue wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/Repeated.java, line 137
> > <https://reviews.apache.org/r/31800/diff/1/?file=887393#file887393line137>
> >
> >     Is this not used?

It is not used anywhere in Repeated.java; the variable is private and was only assigned, never read.


- Sergio


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/#review75583
-----------------------------------------------------------




Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by Ryan Blue <bl...@apache.org>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/#review75583
-----------------------------------------------------------


I just have one major question in the vectorization code. Otherwise this looks great! Thanks Sergio!


ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java
<https://reviews.apache.org/r/31800/#comment122781>

    If the incoming objects aren't necessarily writables, why doesn't this require cases for `values[i] instanceof Double`, `Boolean`, `Float`, `Integer`, and `Long`?
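
The question above is about type dispatch: once raw boxed primitives flow into the vectorization path, any instanceof-based selection of column assigners has to cover the boxed types too, not just Writables. A minimal hypothetical illustration (the names are invented, not the real VectorColumnAssignFactory API):

```java
public class AssignDispatchSketch {
  // Hypothetical dispatch on a sample value; boxed primitives need their own
  // branches once values are no longer guaranteed to be Writables.
  static String columnKind(Object v) {
    if (v instanceof Integer || v instanceof Long || v instanceof Boolean) {
      return "long-column";   // integral and boolean values share long vectors
    }
    if (v instanceof Double || v instanceof Float) {
      return "double-column";
    }
    return "other";
  }

  public static void main(String[] args) {
    System.out.println(columnKind(1));    // long-column
    System.out.println(columnKind(1.5));  // double-column
  }
}
```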



ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/Repeated.java
<https://reviews.apache.org/r/31800/#comment122773>

    Is this not used?



ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
<https://reviews.apache.org/r/31800/#comment122776>

    Should this be named `arrObjects`?


- Ryan Blue


On March 6, 2015, 9:18 a.m., Sergio Pena wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31800/
> -----------------------------------------------------------
> 
> (Updated March 6, 2015, 9:18 a.m.)
> 
> 
> Review request for hive, Ryan Blue and cheng xu.
> 
> 
> Bugs: HIVE-9658
>     https://issues.apache.org/jira/browse/HIVE-9658
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> This patch bypasses primitive java objects to hive object inspectors without using primitive Writable objects.
> It helps to reduce memory usage.
> 
> I did not bypass other complex objects, such as binaries, decimal and date/timestamp, because their Writable objects are needed in other parts of the code,
> and creating them later takes more ops/s to do it. Better save time at the beginning.
> 
> 
> Diffs
> -----
> 
>   itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java 4f6985cd13017ce37f4f0c100b16a27aa5b02f8b 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java c915f728fc9b27da0fabefab5d8f5faa53640b78 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/MapredParquetInputFormat.java 0391229723cc3ecef551fa44b8456b0d2ac93fb5 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/VectorizedParquetInputFormat.java d7edd52614771857d1b21971a66894841c248ef9 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ConverterParent.java 6ff6b473c9f1867bc14bb597094ddb92487cc954 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/DataWritableRecordConverter.java a43661eb54ba29692c07c264584b5aecf648ef99 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 377e362979156b8d52d103192b22bd7f19fa683b 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveCollectionConverter.java f1c8b6f13718b37f590263e5b35ed6c327f5cf4f 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveGroupConverter.java c6d03a19029d5bcc86b998dd7a8609973648c103 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveStructConverter.java f95d15eddc21bc432fa53572de5756751a13341a 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/Repeated.java ee57b31dac53d99af0c5a520f51102796ca32fd3 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java 47cd68200d3be9260aa35385d0dade74d7dc215d 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java 6dc85faecabd59dfc616e908926c1f6b6db372de 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/AbstractParquetMapInspector.java 49bf1c5325833993f4c09efdf1546af560783c28 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java bb066afd38aea6b2eb119b0f8ec8d00af57dc187 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/DeepParquetHiveMapInspector.java 143d72e76502d4877e8208181d9743259051dcea 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveArrayInspector.java bde0dcbb3978ba47b15ae2c9bbe2f87ed3984ab1 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 7fd5e9612d4e3c9bf3b816bc48dbdbe59fb8a5a8 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/StandardParquetHiveMapInspector.java 22250b30a14d52907fb22d4f44b93c7633c6a89e 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetByteInspector.java 864f56292fa4856df155f546064e4a6732cc663f 
>   ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/primitive/ParquetShortInspector.java 39f265777c7e164382117e3902c3b6e491295f70 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/AbstractTestParquetDirect.java 3a476731e31bf38822f0d530f0aea2eadb675a49 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestArrayCompatibility.java d45d8eeb9e8a61f254098ab15d0305fc71152abd 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestDataWritableWriter.java 8f03c5b403332f7b36b2271a2246a0fc90b3bfba 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapStructures.java 3c7401ffbe88ce66b96f9cceab4e9c3d6267f8fe 
>   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestMapredParquetInputFormat.java 1a54bf5797efd5859c9e665bcc7134168e5d193f 
>   serde/src/java/org/apache/hadoop/hive/serde2/io/HiveArrayWritable.java PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/31800/diff/
> 
> 
> Testing
> -------
> 
> Some performance tests were done to validate this.
> 
> Schema: int,double,boolean,string,array<int>,map<string,string>,struct<a:int,b:int>
>   
> - JMH microbenchmark measuring Parquet read calls (ops/s).
>   
>   Before: 579 ops/s
>   After:  651 ops/s
> 
> - YourKit Java Profiler measuring recorded object counts and sizes.
>   Reading 20,000 random rows (10 times)
>   
>   Before:
>      Objects recorded:   1,863,610
>      Objects size:       42,373,808 bytes
>      Total memory usage: 29%
>      
>   After:
>      Objects recorded:   1,596,804
>      Objects size:       34,192,832 bytes
>      Total memory usage: 24%
> 
> All tests were run multiple times to confirm the results were consistent.
> 
> 
> Thanks,
> 
> Sergio Pena
> 
>
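The conversion change described above can be sketched as follows. This is a simplified, self-contained illustration of the idea — not the actual Hive ETypeConverter code — with a hypothetical `Parent` interface standing in for the real `ConverterParent`: instead of allocating a Writable wrapper per value, the converter hands the boxed Java primitive straight to its parent.

```java
import java.util.ArrayList;
import java.util.List;

public class PrimitiveBypassSketch {

    // Hypothetical stand-in for Hive's ConverterParent: collects the
    // converted value for a given field index.
    interface Parent {
        void set(int index, Object value);
    }

    // Before the patch, each int read from Parquet allocated a new
    // IntWritable wrapper. After the patch, the boxed Integer is passed
    // through directly, saving one object allocation per value.
    static void addInt(Parent parent, int index, int value) {
        parent.set(index, value); // autoboxes to Integer, no Writable allocated
    }

    public static void main(String[] args) {
        List<Object> row = new ArrayList<>();
        Parent parent = (i, v) -> row.add(v);
        addInt(parent, 0, 42);
        System.out.println(row.get(0));                    // 42
        System.out.println(row.get(0) instanceof Integer); // true
    }
}
```

The object inspectors on the read path then unwrap the boxed primitive instead of a Writable, which is where the recorded-object savings in the profile come from.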


Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by Sergio Pena <se...@cloudera.com>.

> On March 9, 2015, 2:06 a.m., cheng xu wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java, line 542
> > <https://reviews.apache.org/r/31800/diff/1/?file=887384#file887384line542>
> >
> >     The original method uses Writable objects and throws an exception if the value is not an instance of Writable. Maybe we should add a *TODO* comment here for future development work.

Thanks Ferd.
I forgot to check for Integer, Long, Float, Double, and Boolean in this method.


- Sergio


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/#review75648
-----------------------------------------------------------




Re: Review Request 31800: HIVE-9658 Reduce parquet memory use by bypassing java primitive objects on ETypeConverter

Posted by cheng xu <ch...@intel.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31800/#review75648
-----------------------------------------------------------


Awesome work! Just two small suggestions.


ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java
<https://reviews.apache.org/r/31800/#comment122856>

    The original method uses Writable objects and throws an exception if the value is not an instance of Writable. Maybe we should add a *TODO* comment here for future development work.
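A minimal sketch of what the comment is asking for — hypothetical names, not the real VectorColumnAssignFactory code. A tiny `IntWritableLike` stand-in replaces Hadoop's `IntWritable` so the example compiles on its own; the point is that the assigner must now accept both Writable wrappers and the raw boxed primitives that arrive after this patch:

```java
public class AssignSketch {

    // Self-contained stand-in for org.apache.hadoop.io.IntWritable.
    static class IntWritableLike {
        private final int value;
        IntWritableLike(int value) { this.value = value; }
        int get() { return value; }
    }

    // Accepts either a Writable-style wrapper or a plain boxed number.
    // TODO (per the review comment): cover Integer, Long, Float, Double,
    // and Boolean explicitly now that raw boxed primitives can arrive here.
    static long toLong(Object val) {
        if (val instanceof IntWritableLike) {
            return ((IntWritableLike) val).get();
        } else if (val instanceof Number) {
            return ((Number) val).longValue();
        }
        throw new IllegalArgumentException(
            "Incompatible value type: " + val.getClass().getName());
    }

    public static void main(String[] args) {
        System.out.println(toLong(new IntWritableLike(7))); // 7
        System.out.println(toLong(3));                      // 3
    }
}
```

Without the `Number` branch, a boxed `Integer` would hit the exception path — which is exactly the failure mode the reviewer is flagging.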



serde/src/java/org/apache/hadoop/hive/serde2/io/HiveArrayWritable.java
<https://reviews.apache.org/r/31800/#comment122857>

    Can we rename it to "ObjectArrayWritable", since it wraps an object array?
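Under the suggested rename, the wrapper might look roughly like this — a hedged sketch of the intent, not the actual code from the patch (the real class is marked PRE-CREATION in the diff): a thin holder for an `Object[]` whose elements may be plain boxed primitives rather than Writables.

```java
// Hypothetical sketch of the proposed "ObjectArrayWritable": a thin
// holder for an Object[] whose elements need not be Writable instances.
public class ObjectArrayWritable {
    private final Object[] values;

    public ObjectArrayWritable(Object[] values) {
        this.values = values;
    }

    // Elements may be boxed primitives (Integer, Double, ...) or Writables.
    public Object get(int index) {
        return values[index];
    }

    public int size() {
        return values.length;
    }
}
```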


- cheng xu

