Posted to reviews@spark.apache.org by kiszk <gi...@git.apache.org> on 2018/01/25 17:52:39 UTC

[GitHub] spark pull request #20361: [SPARK-23188][SQL] Make vectorized columnar reader...

Github user kiszk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20361#discussion_r163918684
  
    --- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedParquetRecordReader.java ---
    @@ -115,13 +116,15 @@
        */
       private final MemoryMode MEMORY_MODE;
     
    -  public VectorizedParquetRecordReader(TimeZone convertTz, boolean useOffHeap) {
    +  public VectorizedParquetRecordReader(TimeZone convertTz, boolean useOffHeap, int capacity) {
         this.convertTz = convertTz;
         MEMORY_MODE = useOffHeap ? MemoryMode.OFF_HEAP : MemoryMode.ON_HEAP;
    +    this.capacity = capacity;
       }
     
    +  // Vectorized parquet reader used for testing and benchmark.
       public VectorizedParquetRecordReader(boolean useOffHeap) {
    -    this(null, useOffHeap);
    +    this(null, useOffHeap, 4096);
    --- End diff --
    
    How about changing the benchmark and test programs to pass the capacity explicitly and removing this constructor?
    Those programs also have access to `SQLConf`, so they can read the configured batch size from it.
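    
    For illustration, a minimal sketch of what such a caller could look like, assuming a `SparkSession` named `spark` in scope and the `SQLConf` accessors `offHeapColumnVectorEnabled` and `parquetVectorizedReaderBatchSize` (the latter introduced by this PR); the names here are illustrative, not the PR's actual test code:
    
        // Benchmark/test caller passing the capacity explicitly, so the
        // two-argument constructor is no longer needed.
        val conf = spark.sessionState.conf
        val reader = new VectorizedParquetRecordReader(
          null,                                   // convertTz
          conf.offHeapColumnVectorEnabled,        // useOffHeap
          conf.parquetVectorizedReaderBatchSize)  // capacity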


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org